Re: [HSSG] Higher speed trade offs
Stephen,
Regarding the FEC & latency - an FEC that is designed to exploit the
transverse dimension (i.e. the correlation between channels) would not
need to add significant latency. The FEC block size (or equivalent)
need only be of the same order as the maximum skew between the
channels, and that skew governs the minimum latency of a non-FEC
channel in any case. At its simplest, a trellis code could be applied
across the channels with an additional latency of ~1 code block. A
cleverly designed maximum likelihood code (is anybody in Alberta or
Cork working on that? :-) could offer similar gain with lower overhead.
In particular, the optical channel with binary signaling presents a
much smaller problem matrix than the multi-level FEXT/NEXT channels of
1000BASE-T.
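
To make the transverse idea concrete, here is a toy sketch (Python,
with purely illustrative lane counts and block length - not the trellis
code I have in mind): one extra parity lane, XORed across the data
lanes, lets the receiver rebuild a single erased lane per block, and
the block it buffers to do so is no deeper than the deskew buffer it
needs anyway.

  import numpy as np

  rng = np.random.default_rng(1)
  N_DATA = 10    # data lanes (illustrative)
  BLOCK = 256    # block length in bits, of the order of the deskew depth (assumed)

  # One block viewed across the transverse dimension: rows = lanes, cols = bit times.
  data = rng.integers(0, 2, size=(N_DATA, BLOCK), dtype=np.uint8)
  parity = np.bitwise_xor.reduce(data, axis=0)   # extra lane = XOR across the data lanes
  tx = np.vstack([data, parity])

  # Channel: one lane is wiped out for this block (an erasure - say a faded wavelength,
  # assumed to be flagged by loss of lock on that lane).
  rx = tx.copy()
  bad_lane = 3
  rx[bad_lane] = rng.integers(0, 2, size=BLOCK, dtype=np.uint8)

  # Receiver: after deskew it rebuilds the failed lane from the other lanes, so the
  # coding adds roughly the one block of latency it had to buffer anyway.
  others = [i for i in range(N_DATA + 1) if i != bad_lane]
  recovered = np.bitwise_xor.reduce(rx[others], axis=0)
  assert np.array_equal(recovered, data[bad_lane])
  print("lane", bad_lane, "rebuilt from the transverse parity")

A real code would correct bit errors rather than whole-lane erasures,
but the latency argument is the same.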
The processing overhead depends largely on the amount of state and, to
a first approximation, would scale with the latency.
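
On the maximum likelihood point, the attraction of binary signaling is
that a brute-force joint detector over a handful of lanes is still a
tiny search. A rough sketch (Python again, with an assumed cross-lane
noise covariance and purely illustrative numbers - not anybody's
proposed receiver) of slicing a few binary lanes jointly against
correlated Gaussian noise, versus slicing each lane on its own:

  import numpy as np

  rng = np.random.default_rng(2)
  LANES = 4                  # kept small so the 2**LANES hypotheses stay cheap
  rho, var = 0.75, 0.25      # assumed cross-lane noise correlation and variance
  C = var * (rho * np.ones((LANES, LANES)) + (1 - rho) * np.eye(LANES))
  Cinv = np.linalg.inv(C)

  # All 2**LANES antipodal patterns (+/-1 per lane) - binary signaling keeps this small.
  hyps = np.array([[1.0 if (k >> i) & 1 else -1.0 for i in range(LANES)]
                   for k in range(2 ** LANES)])

  def joint_ml(r):
      # Pick the lane pattern with the smallest whitened (Mahalanobis) distance to r.
      diff = hyps - r
      metric = np.einsum("ij,jk,ik->i", diff, Cinv, diff)
      return hyps[np.argmin(metric)]

  tx = rng.choice([-1.0, 1.0], size=(50000, LANES))
  noise = rng.multivariate_normal(np.zeros(LANES), C, size=len(tx))
  rx = tx + noise

  per_lane = np.sign(rx)                        # each lane sliced on its own
  joint = np.array([joint_ml(r) for r in rx])   # exploits the cross-lane correlation
  print("per-lane BER:", np.mean(per_lane != tx))
  print("joint ML BER:", np.mean(joint != tx))

The multi-level FEXT/NEXT case would blow that enumeration up, which
is exactly why the binary optical channel looks like the smaller
problem matrix.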
Hugh.
Stephen Bates wrote:
> Hi Hugh and others
>
> I have been following this mailing list with interest and wanted to
> comment on Hugh's statement in his email.
>
> "It strikes me that if the sources and destinations of many carriers
> are co-located and correlated then coding can eliminate inter signal
> interference."
>
> I am trying to understand what the advantage is of merging ten 10G
> channels into one 100G channel versus keeping the 10G channels
> separate. It seems to me that all the buffering and SAR requirements
> are only of value if we do take advantage of all the dimensions.
> Obviously this is something we've done in 1000BASE-T and 10GBASE-T by
> running some kind of FEC over all four dimensions.
>
> However, we've already had a discussion on how FEC adds latency, and
> that may not be acceptable in short-haul applications. Also, decoding
> a 10-dimensional code would not be trivial, though the potential
> coding gain would be large, allowing dense packing of wavelengths.
> In addition, if there is significant correlation across the
> dimensions/wavelengths, we can take advantage of that using
> maximum-likelihood detection techniques. Again, the complexity and
> latency become issues. However, the maximum-likelihood approach is
> interesting in that it can be utilized without compensating for any
> bulk skew mismatch between the dimensions/wavelengths.
>
> I look forward to seeing how this work develops.
>
> Cheers
>
> Stephen
>
> ------------------------------------------------------------------------
>
> Dr. Stephen Bates PhD PEng SMIEEE
>
> High Capacity Digital Communications Laboratory
> Department of Electrical and Computer Engineering
> The University of Alberta, Edmonton, Canada, T6G 2V4
> Phone: +1 780 492 2691   Fax: +1 780 492 1811
> stephen.bates@xxxxxxxxxxxxxxx
> www.ece.ualberta.ca/~sbates
> ------------------------------------------------------------------------
>
> Hugh Barrass wrote:
>
>> Andrew and others,
>>
>> It often amuses me that technical principles from one field of
>> invention seem to leak into other fields. The mechanism that you
>> suggest strikes me as very similar to Discrete Multi-Tone modulation,
>> used in DSL. There are considerable advantages of compact
>> multi-carrier systems over higher-baud-rate single-carrier systems. I
>> guess it's only a matter of time before someone comes in (or back)
>> with optical multi-level signaling to make the matrix complete :-)
>>
>> Not being an optical expert allows me the freedom to look at this
>> from the outside and to suggest some ideas that may (or may not) be
>> completely hopeless. Has anyone considered the use of FEC codes
>> designed to correct errors caused by ultra-fine WDM spacing? It
>> strikes me that if the sources and destinations of many carriers are
>> co-located and correlated, then coding can eliminate inter-signal
>> interference.
>>
>> Isn't communications theory fun? :-)
>>
>> Hugh.
>>
>