With all the new technologies in place, we still see latency across the board on medium-scale servers, and at times on large-scale ones too. Competitive business in the IT domain demands continuous server availability. Present-day load balancers do balance the incoming load, but they have a threshold beyond which that balancing breaks down.
Identifying a load controller, deploying it in the cloud infrastructure or in front of an independent server, and configuring it to streamline the ingress load (alerting the source while queuing the excess to avoid further damage) would help ensure that latency is avoided.
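To make the idea concrete, here is a minimal sketch of such a controller: a bounded ingress queue that admits requests up to a configured threshold and immediately signals overload back to the source for anything beyond it. All names here (`IngressLoadController`, `admit`, `drain`, `capacity`) are illustrative assumptions, not APIs from any cloud provider:

```python
import queue

class IngressLoadController:
    """Bounded ingress queue: admit up to `capacity` pending requests,
    shed (and alert the source about) anything beyond that threshold."""

    def __init__(self, capacity):
        self._pending = queue.Queue(maxsize=capacity)
        self.rejected = 0  # count of requests shed back to the source

    def admit(self, request):
        """Return True if queued for the backend, False if shed.

        A real deployment would replace the counter below with an
        actual alert to the source, e.g. an HTTP 429 back-pressure reply.
        """
        try:
            self._pending.put_nowait(request)
            return True
        except queue.Full:
            self.rejected += 1
            return False

    def drain(self):
        """Backend worker pulls the next admitted request, if any."""
        try:
            return self._pending.get_nowait()
        except queue.Empty:
            return None

# Usage: a burst of 5 requests against a controller sized for 3.
ctrl = IngressLoadController(capacity=3)
results = [ctrl.admit(f"req-{i}") for i in range(5)]
# The first 3 are queued; the last 2 sources are told to back off,
# so the backend never sees more than its low-latency working set.
```

The point of the sketch is that the queue bound, not the backend, absorbs the burst, which is what keeps the server itself at low latency.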
Even with versatile load balancers and elastic server methodologies behind an application, we still witness and discuss throughput/latency issues caused by ad hoc load spikes across physical and cloud servers.
When critical sources invoke a complex server/application with unpredictable load arriving from various interfaces, the elasticity of that server/application is always at stake. Identifying a solution within Azure, AWS, or IBM Cloud that keeps a server at low latency through any burst of load, and thereby sustains a low-latency business, is inevitable.
I do have an idea, but I would need a skillful community to take it up and think it through. Google Cloud would be the best platform for it. May I know how to submit an outline of it, please?
Depending on how big you want to go with this idea, you may want to check out https://startup.google.com/