
Should You Run An Entire Web Application in AWS Lambda? 

James Eastham
3.8K subscribers · 1.7K views

To Lambda-lith or not to Lambda-lith, that is the question!
Hi, I'm James, and in this video I'm going to attempt to answer that question for you. You will learn about the pros and cons of building single-purpose Lambda handlers vs running an entire web application on Lambda, whether that's with ASP.NET, Java or Rust.
You'll learn why the decision extends beyond simple cold start times, and you'll get into the nuance of when it is, and isn't, the correct pattern to choose.
00:00 - Introduction
00:30 - A Change in Opinion
01:10 - Cold Starts, a problem?
02:30 - Let's get data driven
03:40 - Are Lambda-liths more performant?
05:30 - How to deploy a web app to Lambda?
07:30 - Pros and Cons of the Lambda-lith
08:30 - Optimize specific endpoints
10:26 - Portable applications with serverless
11:30 - BE DATA DRIVEN!!!
12:25 - Future Videos on this channel
Links
Zero2Production Rust with Serverless - github.com/jeastham1993/zero-...
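
For readers new to the term: a Lambda-lith routes every request from API Gateway to a single function and lets the web framework inside it do the routing. A minimal sketch of the idea in Node.js, assuming an Express app and the @vendia/serverless-express adapter (also published as @codegenie/serverless-express); the video itself demonstrates ASP.NET, Java and Rust, so the library choice here is illustrative only:

// handler.js - an entire Express application behind one Lambda function (illustrative sketch)
const express = require('express');
const serverlessExpress = require('@vendia/serverless-express');

const app = express();
app.use(express.json());

// Ordinary framework routing; API Gateway simply forwards every path to this one function.
app.get('/health', (req, res) => res.send('ok'));
app.get('/users/:id', (req, res) => res.json({ id: req.params.id }));
app.post('/orders', (req, res) => res.status(201).json(req.body));

// The adapter translates the API Gateway proxy event into an HTTP request for Express,
// and the Express response back into the proxy response shape Lambda expects.
exports.handler = serverlessExpress({ app });

The same shape applies to the other runtimes mentioned in the video; only the adapter changes (for example Amazon.Lambda.AspNetCoreServer for ASP.NET or the lambda_http crate for Rust).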

Science

Published: 16 Jul 2024

Comments: 31
@lechegaray · 5 months ago
Great to see more YouTube videos about lambda-liths, it's a great default approach that your dev teams won't hate immediately ;)
@serverlessjames · 5 months ago
100% agree. More content about the lith coming in the next few weeks. Thanks for the comment.
@MarianoGomezBidondo · 4 months ago
I love your content and I love your English pronunciation! I'm from Argentina and your English is one of the easiest to understand. Also, your content is very advanced and yet simple to understand. Thanks a lot!
@serverlessjames · 4 months ago
Kind words indeed, thank you ☺️
@woper522 · 5 months ago
Thank you very much, James! This is definitely the content I was looking for. A couple of friends and I are trying to run a project and your videos are a great match. Of course we're all subscribed, please keep going!!!
@woper522 · 5 months ago
Tomorrow I'll definitely give you a tip, as soon as my card is approved.
@serverlessjames · 5 months ago
I'm glad you've found the content useful, thanks 🎉 Any follow-up questions, feel free to reach out.
@DanielGraham655t · 5 months ago
Thanks for the video!
@serverlessjames · 5 months ago
Thank you 🫂♥️ I'm glad you liked it.
@RDarrylR · 5 months ago
Great Video! I totally agree that this is a better approach in many cases.
@serverlessjames · 5 months ago
Glad you liked it!
@bobby_mathews · 5 months ago
Great video, James.
@serverlessjames · 5 months ago
Thanks 👍
@paulgledhill9209 · 5 months ago
Super video. Great subject to be talking about, and the production quality is fantastic. Keep it up! That said... I'm on the Go journey myself. You've prompted me to buy Luca's book. Let's see if I change direction! 🙂
@serverlessjames · 5 months ago
Ask Ben Pyle his opinion; I'd guess he would take Rust over Go now. It's a brilliant language.
@Michael-sm3te · 5 months ago
Thank you for the video overview. I'd like to add that in addition to one Lambda per endpoint (API Gateway being the router), or one function per API (the function being the router), there's a third option. Consider this endpoint structure in API Gateway:

/
  /authenticated
    /{model}   ANY >> authenticatedLambda
  /unauthenticated
    /{model}   ANY >> unauthenticatedLambda

Here, {model} is a dynamic resource, e.g. /authenticated/users/:id or /unauthenticated/terms, and API Gateway proxies invocation events to Lambda. This approach provides:
- few Lambdas
- shared code and state between models
- dynamic API updates without touching API Gateway
- the ability to manage role-based access, data models, request/response models etc. in Lambda (...)

This doesn't require any framework or tools and can be implemented with simple NodeJS functions.
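
To make the structure above easier to picture, a rough AWS CDK sketch of the same routing (a sketch only: construct names and the Node.js runtime are assumptions, and deeper paths such as /authenticated/users/:id would additionally need a greedy {proxy+} resource under {model}):

// stack.js - two proxy functions, one per branch of the API (illustrative sketch)
const cdk = require('aws-cdk-lib');
const lambda = require('aws-cdk-lib/aws-lambda');
const apigw = require('aws-cdk-lib/aws-apigateway');

class ModelApiStack extends cdk.Stack {
  constructor(scope, id, props) {
    super(scope, id, props);

    const authenticatedFn = new lambda.Function(this, 'AuthenticatedFn', {
      runtime: lambda.Runtime.NODEJS_18_X,
      handler: 'authenticated.handler',
      code: lambda.Code.fromAsset('lambda'),
    });
    const unauthenticatedFn = new lambda.Function(this, 'UnauthenticatedFn', {
      runtime: lambda.Runtime.NODEJS_18_X,
      handler: 'unauthenticated.handler',
      code: lambda.Code.fromAsset('lambda'),
    });

    // /authenticated/{model}   ANY >> authenticatedLambda
    // /unauthenticated/{model} ANY >> unauthenticatedLambda
    const api = new apigw.RestApi(this, 'ModelApi');
    api.root
      .addResource('authenticated')
      .addResource('{model}')
      .addMethod('ANY', new apigw.LambdaIntegration(authenticatedFn));
    api.root
      .addResource('unauthenticated')
      .addResource('{model}')
      .addMethod('ANY', new apigw.LambdaIntegration(unauthenticatedFn));
  }
}

module.exports = { ModelApiStack };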
@serverlessjames · 5 months ago
When you say this doesn't require any frameworks in the functions, are you manually implementing the routing then? So if someone calls GET /authenticated/users and POST /authenticated/terms, you have manually written all the routing logic?
@Michael-sm3te · 5 months ago
Hi @serverlessjames, "routing" is not necessary for my REST API example. I see an API request as a simple asynchronous function call. Lambda is a function with a parameter (the "invocation" event). API Gateway forwards all necessary details, including a resolved Cognito user or identity, to the Lambda handler (JSON). In my handler, I simply extract the method (e.g. "GET"), params ("model", "id", etc.), body, and requestor from the invocation event.

Each "model" (= endpoint) then matches a subfolder in the Lambda. Each subfolder has a config.json that defines the valid request format, and an action.js file with the business logic. (I also have some shared folders for shared components, for example a DynamoDB connector, a Cognito connector, etc.) Any request that doesn't match a model is denied, and any request not validating against a config.json returns a detailed error response. I can use the config.json for basic access control using AWS Cognito groups.

With this pattern, I can easily define the output model based on the input model. E.g. "POST /events" can return individual results based on requestor/role, combine data from different sources, enrich results, or orchestrate side effects. It dramatically simplifies the frontend. Adding a new endpoint to the API is simply adding another "model" folder with action and config.

I put this together when I first looked at AWS Lambda in 2016 and have been using it for production web apps for the past 7 years; it works great. I also use Lambda for server-side rendering, with the NodeJS function using ReactJS to return rendered HTML instead of JSON, using the same pattern.
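
As a purely hypothetical illustration of the folder convention described in that comment (file names, the schema shape and the shared connector are assumptions, not taken from the commenter's code):

// models/users/config.json - valid request shape per HTTP method (JSON Schema, validated with ajv)
// {
//   "GET":  { "type": "object", "required": ["id"] },
//   "POST": { "type": "object", "required": ["email", "name"] }
// }

// models/users/action.js - business logic per HTTP method
const dynamo = require('../../shared/dynamodb'); // hypothetical shared DynamoDB connector

module.exports = {
  // request carries the method, params, body and requestor extracted from the invocation event
  GET: async (request) => dynamo.getUser(request.params.id),
  POST: async (request) => dynamo.createUser(request.body, request.requestor),
};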
@serverlessjames · 5 months ago
Interesting, seems a valid approach. But you have built a custom routing framework inside your function...? Albeit one that routes based on the model and folder structure of the incoming request.
@Michael-sm3te · 5 months ago
@serverlessjames, here's my handler logic for this example:

// simplify the invocation event
const request = map_request(event);

// validate the request based on config (uses ajv)
const [is_valid, validation_errors] = validate_request(
  request,
  model_configs[request.params.model],
);
if (!is_valid) {
  return response_error(
    null,
    "request validation error",
    validation_errors,
  );
}

// execute the request
const execution = actions[request.params.model][request.httpMethod];
let result = await execution(request);

return {
  statusCode: 200,
  headers: {
    "Access-Control-Allow-Origin": "*",
    "Content-Type": "application/json",
  },
  body: JSON.stringify(result),
};

I import all actions from all subfolders into actions upfront to simplify. This line would be your "router framework", if you will:

const execution = actions[request.params.model][request.httpMethod];

I used to use the file system and import only the model for the current request, but it's slightly slower and requires more configuration for build and pack. Obviously, I pack my functions just like I'd pack frontend React.
@michaelakin766 · 5 months ago
We have done this, but I was frustrated by the 6-second cold start, so I was starting to look at Fargate before I got laid off.
@serverlessjames · 5 months ago
This is absolutely a valid point, and ECS + Fargate is a great way to go. It will be needed for a whole range of use cases. I know in the video I say you'll see fewer cold starts, but 6s is a looooooong time.
@janivimal · 2 months ago
Hi James, great video. I'm trying to wrap my head around a few things: since Lambda is stateless, how would the website manage sessions in Lambda? Also, would Lambda be able to serve the JavaScript, images, etc. that are embedded in a page? If data needs to be posted back to the page, how should that be handled? And would API Gateway support Windows authentication?
@scrawl281 · 5 months ago
Hi, James! Love your videos; I've learned a lot from them. Can you help me understand something here though? When you are doing your math at around 4:15, won't all three individual requests coming in have their own Lambda execution environment (assuming no provisioned concurrency), and thus won't each one have its own cold start, resulting in 1000ms + 50ms + 1000ms + 50ms + 1000ms + 50ms? Won't each individual request have its own Lambda and thus its own instance of the web application (again, assuming no provisioned concurrency)? What am I missing here?
@serverlessjames · 5 months ago
It's a great question. In my contrived example I'm assuming each request comes in sequentially, so the 2nd doesn't start until the 1st is finished. In that scenario, the execution environment would be re-used. If all 3 requests hit the API at the same time, then yes, you would still likely see 3 cold starts. But over time, you'll definitely see fewer cold starts than with 3 separate functions.
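
Working through the numbers from this exchange (the 1000 ms cold start and 50 ms handling time are the illustrative figures from the question above):

Sequential requests, single Lambda-lith (environment reused between calls):
  request 1: 1000 ms cold start + 50 ms handling = 1050 ms
  request 2: 50 ms
  request 3: 50 ms
  -> one cold start across all three requests

Same three requests arriving concurrently (or split across three single-purpose functions):
  each request: 1000 ms + 50 ms = 1050 ms
  -> three cold starts, one per execution environment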
@scrawl281 · 5 months ago
@serverlessjames OK, I think I understand what you're saying. So, in your example we are assuming that the time between each of the three requests is short enough that the Lambda is still "warm" and thus is reused for the next request? How long will an unused Lambda stay warm? BTW, thank you for taking the time to respond!
@serverlessjames · 5 months ago
@scrawl281 Yes, exactly! Which I realise is a big assumption, but it's the easiest way to demonstrate how it can be beneficial. There's no defined SLA for how long an environment stays warm, unfortunately.
@scrawl281 · 5 months ago
@serverlessjames Perfect. That clears it up for me. Thank you!!