The Internet has introduced many new concepts that people now use for comfort, entertainment, and almost everything else. Today it is easy to reach family and friends far away through the latest social networking applications.
But that is only the user's side of the story. An application developer faces a whole set of challenges, such as managing fluctuating Internet traffic to an application or website as it reaches new scalability limits. Traffic can jump from thousands of requests to millions, so system developers and architects have to scale the application by taking advantage of cloud resources.
The cloud provides quick resource allocation and de-allocation under unpredictable demand, and this makes it well suited to scalable applications. Beyond that, every phase of an application's life cycle can be accommodated by the cloud's resources and infrastructure.
A scalable architecture lets us test an application under real-world conditions and scale it according to demand. Unpredictable traffic can stress a system in every way, but a scalable application adapts to these rapidly changing conditions and improves the reliability and availability of the service.
A scalable server array forms the second tier of the reference architecture. Initially this tier is configured with two servers in different availability zones, with an automatic scaling alert mechanism in place driven by instance-specific metrics. System load, free memory, and CPU idle time are the metrics most commonly used for auto scaling.
When a metric crosses its configured threshold, an alert is raised and auto scaling kicks in; whether the tier scales up or down depends on which threshold was crossed. To cut costs early in an application's life cycle, the front-end load balancers can be combined with the application servers on the same instances to save on infrastructure, and the two roles can be separated later as traffic grows.
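The threshold-based alert logic described above can be sketched as a small function. The metric name and the threshold values here are illustrative assumptions, not from any specific cloud provider's API; real systems also average the metric over a window before acting.

```python
# Minimal sketch of a threshold-based auto-scaling decision.
# Threshold values (80% / 20% CPU) are illustrative, not standard defaults.

def scaling_decision(metrics, scale_up_cpu=80.0, scale_down_cpu=20.0):
    """Return 'up', 'down', or 'hold' for one metric sample.

    metrics: dict with key 'cpu_percent', the average CPU utilization
    over the alert window.
    """
    cpu = metrics["cpu_percent"]
    if cpu >= scale_up_cpu:
        return "up"      # sustained high load: add an instance
    if cpu <= scale_down_cpu:
        return "down"    # sustained idle: remove an instance
    return "hold"        # within the normal band: no action

print(scaling_decision({"cpu_percent": 92.0}))  # up
print(scaling_decision({"cpu_percent": 10.0}))  # down
print(scaling_decision({"cpu_percent": 55.0}))  # hold
```

In practice the same pattern applies to the other metrics mentioned above (system load, free memory), each with its own pair of thresholds.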
The caching tier's aim is to increase performance, but it does not suit every application. Read-heavy applications can see big performance gains, because caching reduces processing time and data access; write-heavy applications might not gain as much.
A cache typically uses little CPU but a large amount of memory, so servers in this tier should be memory-optimized instances. Early in an application's life cycle the caching requirement is small, so a single instance may be enough to serve the whole application tier; under normal or heavier conditions, increase the number of instances.
A single caching server could fail at any time, which would put a heavy load back on the application, so run at least two instances in the caching tier in different availability zones. As usage grows, add a buffer of extra caching capacity. When the number of caching servers increases, the application servers use a hashing algorithm to map each key to the correct caching server.
Another feature used in this tier is time to live (TTL), which caching servers use to expire stored data. Expiring entries frees memory for new data by deleting stale entries automatically. Applications that need only a small amount of caching can co-locate the cache on the application servers, which also saves cost.
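A minimal in-process sketch of TTL expiry, assuming a simple dictionary store; dedicated caching servers such as memcached or Redis expire entries on the same principle.

```python
import time

class TTLCache:
    """Toy cache where every entry expires ttl_seconds after it is set."""

    def __init__(self, ttl_seconds):
        self.ttl = ttl_seconds
        self._store = {}  # key -> (value, expiry timestamp)

    def set(self, key, value):
        self._store[key] = (value, time.monotonic() + self.ttl)

    def get(self, key):
        entry = self._store.get(key)
        if entry is None:
            return None
        value, expires = entry
        if time.monotonic() >= expires:
            del self._store[key]  # expired: free the slot for new data
            return None
        return value

cache = TTLCache(ttl_seconds=60)
cache.set("user:42:profile", {"name": "Alice"})
print(cache.get("user:42:profile"))  # value is served until the TTL lapses
```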