This discussion is interesting in that, on one hand, we're talking about 50G, which is bleeding edge today, while on the other we're shooting for aggregated-bandwidth links in 3-5 years that show no sign of being on the edge.
50G serial (or even 100G serial) is interesting as a server link if it provides better economics than the existing solution. The same applies to uplinks. If 200G is economically competitive compared to 100G or 400G or 1.6T, then it will gain traction in the market. But it's not just the cost of the optical module; it's the cost of the whole ecosystem. The uplink bandwidth has an impact on the FIB, which in turn has an impact on switch memory requirements.
There are a couple of factors that I believe need to be considered: the laws of physics (how much bandwidth can we put down a single lane) and the laws of economics (how do we make sure there's sufficient market to justify the solution). When Ethernet operated at 10x speed increments, it was much simpler to ensure the laws of economics were being met. Does 200G satisfy the laws of economics? Does 800G?
All of this is directly impacted by the time it takes to create a standard. Is it two years? Three years? Or four years? Or would it be wiser for the working group to reconsider how it does projects? Should we look at a project that decouples the speed of the MAC (which only takes mere seconds to change for each new project speed) from the speed of the PHY (which, as we all know, is where the lion's share of the work occurs)? This could permit the speed of the MAC to merely be an aggregate of similar-speed PHYs in a base-2 scale (1, 2, 4, 8, 16, etc.).
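Purely as an illustration of that last point (a minimal sketch of my own, with the 50G lane rate and 16-lane cap just picked as assumptions, not anything defined by the working group), the achievable MAC rates would simply fall out of the per-lane PHY rate times a power-of-two lane count:

    # Hypothetical sketch: MAC rate as a power-of-two aggregate of one PHY lane rate.
    def mac_speeds(phy_lane_gbps: float, max_lanes: int = 16) -> list[tuple[int, float]]:
        """Return (lane_count, aggregate_gbps) pairs for power-of-two lane counts."""
        speeds = []
        lanes = 1
        while lanes <= max_lanes:
            speeds.append((lanes, lanes * phy_lane_gbps))
            lanes *= 2
        return speeds

    if __name__ == "__main__":
        # Example: a 50G serial PHY lane yields 50/100/200/400/800G MAC rates.
        for lanes, gbps in mac_speeds(50.0):
            print(f"{lanes} x 50G lanes -> {gbps:.0f}G MAC")

In other words, once a single-lane PHY is done, the family of MAC speeds above it is essentially free, and the hard work stays where it belongs.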
Just food for thought,
Brad