Same here. Compared to the IBM explanation this is much better, but I'm still not fully convinced. For example, a traditional application can be designed as stateless and still share the 'shopping cart' information among the servers via session replication if the servers are in a cluster. With that, it doesn't necessarily need sticky sessions and will be as fault tolerant as a cloud application.
I was so confused when I saw the author of this video, thinking "wait, don't I watch him for skydiving advice?" but funnily enough I watch these kinds of videos all the time as a computer science student
Great overview of the reasons for re-architecting to cloud-native. Many thanks - it really helped me understand the justification for CNA architecture. Just a slight comment to remove any confusion (and it's certainly not central or important to your focus). At around 5:40, you refer to VMware vMotion as a failure-recovery technology (used for orchestrating VM recovery when a VM dies)... It is VMware HA that carries out cluster-based recovery in vSphere. vMotion is used together with DRS to live-migrate continually running VMs (providing contention avoidance), so it isn't a recovery technology.
In traditional apps, we have an in-memory session replication option in middleware servers that replicates sessions to the other servers, so the session won't be lost when a server goes down; if required, we can also disable sticky sessions at the LB level.
True - server sticky can't be the _only_ thing that makes it cloud native. Here's a different take that supports that point with different tech tricks. First of all, we don't use sticky on-prem and have had stateless web since ASP 3.0! On the other hand, we have apps that would have to deserialize their objects from the cache or DB server before processing requests in stateless mode, and lose 100ms in the process... so just turn on server sticky and leave the objects alive in the session, spit out the results to the client and keep a secure token client-side, then queue the cached/modified session object tree for serialization/saving to the DB as the request is processed. On the next request, we look up the token in the cached docs in the session. If sticky works, your old objects are right there in the session - no deserialize! Otherwise, if the server died (or AWS killed it or had to rebalance the load), the web app doesn't find the cached objects, takes the time to deserialize from the DB/cache server, and the rest of the app behaves as normal. This is literally one IF statement in code (plus, obviously, token management and starting with an architecture that supports this).
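That "one IF statement" pattern can be sketched roughly like this (Python, with all names hypothetical and a plain dict standing in for the real cache/DB server):

```python
import pickle

# Stand-in for the shared store (in real life: Redis, Memcached, or a DB table).
shared_store = {}

class AppServer:
    """One web server instance with an in-process session cache."""
    def __init__(self):
        self.local_sessions = {}  # live objects kept warm by sticky routing

    def handle_request(self, token):
        session = self.local_sessions.get(token)
        if session is None:
            # Sticky routing missed (server died or the LB rebalanced):
            # pay the deserialization cost once, then carry on as normal.
            session = pickle.loads(shared_store[token])
            self.local_sessions[token] = session
        return session

    def save(self, token, session):
        self.local_sessions[token] = session
        # Serialize to the shared store after responding (queued in real life).
        shared_store[token] = pickle.dumps(session)
```

If `server_a` saves a session and then dies, a fresh `server_b` still recovers it from the shared store on the first miss; every later request on `server_b` hits its warm local copy again.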
Al Bud, very nicely explained. Thanks! I have lately gotten into Appian (BPM) programming, and having come from a traditional SQL/ETL/BI world with very little hands-on experience in web development, developing in Appian is new and exciting to me, since the BPM tool's architecture hides most of the complexities that come with traditional web application development. I am trying to gain more knowledge of what used to be and what is to come in web application development so I can be more prepared to field challenges in my new career path.
I think, from most of the comments on this video, we can see what a nice "breath of fresh air" this is... Sometimes, especially in software development, it's almost impossible to find a good explanation of a concept rather than a crappy, generic, almost advertising-like video... Like that IBM video, which is a bunch of bullshit talk about how everyone should be using cloud native architecture that never gets to the point of why, or what it even is... We love straight-to-the-point explanations and documentation, not the stupid time-wasting videos and advertisements I see a lot of in the software engineering community nowadays. Thank you sir!
Maybe I missed an explanation: why is the load immediately shared in cloud apps when VMs are added? Is it that all VMs share the state, but only one of them is processing the traffic at any given time?
Ah, I think the confusion is the term "load" itself. It doesn't refer to raw CPU load; it refers to HTTP requests. With a stateful application, a user's HTTP requests are tied to a single host. If the host goes down, the session has to restart, causing a negative user experience (required to log in again, shopping cart empty, etc.). With a shared-state application, the HTTP requests from a user can be sent to any host in the cluster. If a host goes down, the user doesn't notice.
In the cloud native apps example, is the cart on each of those instances? If so, are those instances consuming resources on all three servers? In the traditional example the cart is only on one server, so the traditional example is only consuming 1/3 of the resources (of the 3 servers) compared to the cloud native one. What am I missing?
Not sure why he showed it that way... typically in these apps, session info would be stored outside the app server, in a database or Redis - like Spring Session JDBC.
You can use very simple Layer 4 load balancers for cloud-based applications. For traditional applications, you'll need a Layer 7 load balancer with something like cookie-persistence. You don't need persistence if an app is written "cloud-based" style.
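A toy sketch of that difference (Python; hypothetical names, not any real load balancer's API): the Layer 4 balancer can blindly round-robin because any server can serve any request, while the Layer 7 balancer has to parse a cookie and pin the client to the one server holding its session.

```python
import itertools

servers = ["app1", "app2", "app3"]
_rr = itertools.cycle(servers)

def l4_route(request):
    """Cloud-style: any server holds (or can fetch) any session,
    so dumb round-robin at Layer 4 is enough."""
    return next(_rr)

def l7_route(request):
    """Traditional-style: the session lives on one server only,
    so the LB must inspect cookies and pin the client to it."""
    cookies = request.setdefault("cookies", {})
    if cookies.get("SERVERID") in servers:
        return cookies["SERVERID"]
    cookies["SERVERID"] = next(_rr)
    return cookies["SERVERID"]
```

With `l7_route`, every request carrying the same cookie lands on the same server; with `l4_route`, six requests spread evenly across all three.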
Is it possible to share the session among the servers in a traditional DC? If so, we can achieve the same level of elasticity by turning on VMs on demand and evenly distributing the traffic among all the servers, for both existing and new users.
It's possible, but it requires a partial re-write of the application. In many cases, it's not done because 1) the app was purchased from a company that won't do it or 2) the scaling requirements (or rather, lack of scaling requirements) are not such that there would be an immediate benefit.
You must break your business process into events, implement each event as a web service, and go for a container-based solution. Please note that containerization and microservices cannot be achieved without redundancy.
utkarsh agrawal: growth in Internet network infrastructure, availability of cloud service providers, and the reduced cost of server and storage infrastructure.
Because users like us don't want to see websites going down or running slowly. To achieve always-on operation and autoscaling (so that instances come and go, increasing and decreasing dynamically based on load), we need to treat the app server as stateless and store session state outside it.
Thanks pal, very informative video. You mentioned building applications that are cloud native and compatible with "shared state" load balancers. Can you reference such frameworks, especially for Java web applications? Many thanks!
They're called stateful services, which are basically key/value databases that the app's stateless servers have access to. Instead of your request, which carries a session token, being checked by whichever server the load balancer routed it to, the stateless server (which does not store the users' data) checks whether your request's token matches a token stored in the stateful service. Every server does that, making the stateful service their single source of truth. Of course, you can scale this architecture by having multiple stateful services if there's a need for that, and adding a load balancer between them, but that's beyond what you're asking about. I recommend reading Cornelia Davis' book 'Cloud Native Patterns'.
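The token-check described above can be sketched in a few lines (Python; the dict is a stand-in for the stateful service, e.g. Redis, and all names are made up):

```python
# Stand-in for the stateful service (in practice: Redis, DynamoDB, etc.).
session_store = {"token-abc": {"user": "alice", "cart": ["shoes"]}}

def handle_request(server_name, token):
    """A stateless server: it holds no user data itself; it just
    checks the token against the shared stateful service."""
    session = session_store.get(token)
    if session is None:
        return 401, "unknown session"
    return 200, f"{server_name}: cart for {session['user']} = {session['cart']}"
```

Note that `web-1`, `web-2`, or any other server returns the same cart for the same token - the store, not the server, is the source of truth.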
Hi Diego. Cloud Native apps are an evolution from how apps used to be engineered. It's mostly used on IaaS, as most PaaS have this inherently built in I believe. The lack of a persistence/server sticky requirement is probably the primary differentiating trait.
Hi Milad, containers are separate from cloud native. You can have cloud native apps on VMs (Netflix does this). However, most apps that run on K8s are "cloud native" because of the ephemeral nature of the containers. It would be difficult, I think, to make a stateful app on K8s and have it be reliable in any way.
@@shadeland Hi Tony, thanks for getting back to my comment :). I'm indeed new to all this, with no virtualization background. Have you done a deep dive on this that I can watch to learn more? What I'm trying to understand is how the servers in a cloud-native environment sync with each other and provide seamless availability and scalability.
There's a variety of methods. It could be something along the lines of keeping state in the back-end DB or having a distributed key store across the web/app hosts, but there are many ways.
Agree with the second part about elasticity; however, the first part, regarding stateless or non-sticky apps, is not that clear and obvious IMO. Microservices + K8s is the core of cloud-native.
Here are my two cents from what I've read about the subject:
0 - Invest in reading about Cloud Native patterns; for example, I highly recommend Cornelia Davis' book "Cloud Native Patterns".
1 - Transition from a monolithic to a microservices-based app architecture.
2 - When developing your microservices-based app, which you've obtained from step 1, make use of DevOps-automated Continuous Integration and Continuous Delivery pipelines (lots of free open source tools offer this, such as OpenShift, Cloud Foundry, ...).
3 - Make sure you're opting for a container-based deployment pipeline. For example, Docker containers are on top of the container world, and Kubernetes is the big boss of Docker container orchestration.
If anybody has any comments regarding what I've just shared, please reply, because I'm a student and I'd love to have a better understanding of the inner workings of the Cloud Native world.
Good explanation. But the last point, where you explain that scaling up and down is no longer a problem, doesn't show me that the benefit of load balancing remains. In the newer approach, all servers get all the requests/data.
Removing stickiness to support elastic load balancing seems logical, but it leads to the problem of not being able to execute actions on a given shopping cart (or any entity) in sequence, since multiple user actions on the same cart could go to different servers. This could be very important in many cases. Also, any attempt to synchronise access to shared data impacts performance severely. In my experience of trying to solve this in recent years, the middle ground is to keep stickiness for old requests (short lifespan) but route new requests across all servers.
Hi Amrish, with cloud native-style apps, you can still have a shopping cart or other stateful behavior; it's just that the state is stored on the back-end or shared across all nodes. So if a request is sent to a different server, that server *also* has the shopping cart information. When applications are architected that way, stickiness is not needed on the load balancer. Synchronizing state locally has no performance impact; it's only when syncing databases/data structures over long distances that performance becomes impacted (because of the round-trip time). Applications written this way look the same to the end user, but they are much more flexible operationally (being "elastic", scaling up and down, etc.).
@@shadeland Yes, synchronisation solves the race condition problem and doesn't impact latency much in most cases. It does become an issue, however, with very low-latency processing requirements, where we want to keep the design lock-free and highly concurrent.
No. I believe that for an application to be labeled cloud native it has to have a microservice architecture, be stateless, be the product of a CI/CD pipeline, and be able to natively use cloud features such as auto scaling, high availability, etc.
Useful video, but the first part (shared storage for traditional apps) didn't really make it that convincing, since stateful applications don't have to store their data locally - why can't they store the user state (logged in, items in shopping cart) on a shared SAN? The second part (scaling out vs. scaling up) is more convincing. Are there no other compelling reasons, aside from being non-sticky, for Cloud Native architecture?
Good explanations, but... your 'pets' explanation doesn't cover clustering of a server, i.e. multiple prod servers synchronized to fail over to a contingency environment on warm standby, so that the 'cart' persistent data would actually exist on the failed-over server you mention.
Thanks Nick for this great video! According to your explanation, the key feature of "cloud native" applications is that they're stateless? It seems to me that statelessness isn't directly related to the "cloud", as the term "cloud native" implies.
Huabing Zhao, the applications aren't stateless, though that's a common name for them. Almost every application has some state; older applications keep singular state on the application server side. If that server goes down, the session state is lost. One of the defining traits of cloud native is that the state is shared across app servers, so if one goes down the customer doesn't notice, because their session is available on any server. You can scale on demand that way, whereas with stateful/singular-state apps it's disruptive to scale the number of nodes up or down.
@@shadeland It makes sense. Shared state allows "cloud native" applications to scale up and down without impact to the client side. So besides this, are there any other important features you think an application must have to make it "cloud native"?
Huabing Zhao, typically they're updated very frequently (weeks, days, hours) instead of infrequently (months, years). Unit testing is typically automated (such as with Jenkins). Deployment/updates are usually automated, making them portable without relying on vMotion or OVAs. Check out 12factor.net for some more aspects.
Tony, your shopping cart analogy is poor. How do you manage database locks on rows you have just read for update if you run the same transactions on multiple application/web servers? Your explanation is good, though, but a cloud-based transactional application works well only if you are reading data objects from cloud storage or block storage (SAN), not for updates/database commits. Your approach is not the solution for all business scenarios. If you follow the CAP theorem, you can only achieve eventual consistency in the cloud, and if you need strict consistency on the data in a cloud native application, you must remain stateful. What I expect from the system is that once I put an item in the shopping cart, it should be available until timeout or until I complete my transaction. Thanks.
The shopping cart analogy is a classic one, used in the load balancing world for about 20 years now. There are multiple ways to provide immediate consistency; nothing I talked about here necessitates inconsistent database writes. Shared in-memory sessions and storing session tables in a database are just a couple of ways to do this. State is always stored somewhere. With cloud native applications, the state is not dependent on the survival of a single node. With traditional enterprise applications, if a single app server goes down, some state is lost. With cloud applications, the loss of any node is non-disruptive because session state is not stored on just one node.
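To make that last point concrete, here's a toy simulation (Python; all names hypothetical, with a plain dict standing in for the real shared DB or replicated in-memory session store):

```python
class Cluster:
    """Toy model: session state lives in a shared table, not on any
    one app server, so losing a node never loses the cart."""
    def __init__(self, n):
        self.up = {f"app{i}": True for i in range(n)}
        self.sessions = {}  # shared state (DB, replicated grid, etc.)

    def add_to_cart(self, token, item):
        self.sessions.setdefault(token, []).append(item)

    def kill(self, name):
        self.up[name] = False

    def get_cart(self, token):
        # Route to any live server; all of them read the same state.
        live = [s for s, ok in self.up.items() if ok]
        return live[0], self.sessions.get(token, [])
```

Kill whichever server just handled the request, and the next request (served by a different node) still sees the same cart.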