👋 Hi, I’m an AWS Serverless Hero and independent consultant. I help people go faster for less using serverless technologies.
I have run production workloads at scale in AWS since 2009 and have a track record of improving businesses with technology. For instance, I led a small team that transformed a social network with serverless technologies in a matter of months. Feature delivery time went from months to days, and sometimes even hours. The system became more scalable and reliable, and was over 90% cheaper to run.
If you want to learn more about serverless and how it can help you deliver features faster and cheaper, subscribe to my channel and check out my courses.
Conceptually, they're different things. Events and commands are used to model your system's behaviour and its interactions, while a Lambda function carries out some business logic in response to an invocation request. In your example, the invocation event might be a "command" if the event carries a message requesting that some action be performed. But Lambda functions can be invoked in response to an event, too. Commands often follow a request & response pattern, but not all request & response communications are related to processing commands.
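To make the distinction concrete, here's a minimal sketch in TypeScript. The type names and fields are mine for illustration, not from any particular framework:

```typescript
// A command is an instruction addressed to a specific handler: "do this".
// Hypothetical shape - the field names are illustrative, not a standard.
interface ChargeCustomerCommand {
  type: "ChargeCustomer";   // imperative: requests an action
  customerId: string;
  amountCents: number;
}

// An event is a statement of fact about something that already happened.
// Zero or more subscribers may react to it; the publisher doesn't care who.
interface CustomerChargedEvent {
  type: "CustomerCharged";  // past tense: records a fact
  customerId: string;
  amountCents: number;
  chargedAt: string;        // ISO 8601 timestamp
}

// The same Lambda handler can process either - whether the invocation
// payload is a command or an event is a modelling decision, not a Lambda feature.
export const handler = async (msg: ChargeCustomerCommand | CustomerChargedEvent) => {
  if (msg.type === "ChargeCustomer") {
    // carry out the requested action...
  } else {
    // react to the fact that a charge happened...
  }
};
```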
Having been in the Community Builders program for 5 years now, I think it's incredible and has helped me connect with other people in the space. It's helped me make better content and get insight into features that are in the pipeline. I recommend it to anyone who creates any kind of content around AWS.
This is a preview lesson from my course; the "next lesson" it refers to is part of the course, but I haven't released it on YouTube. I've added a link to the full course in the description instead.
A problem I often face is too many limitations, each requiring another service to work around. Soon, you have a hodgepodge of serverless services.
Really enjoyed this. Lots of invaluable insights into Azure Functions, borne of real-world experience. Particular highlights for me were the discussion at ~11:55 on the pricing plans; at ~18:15 on the nature of scale-out and how it can affect costs; at ~40:00 on micro-VMs like Firecracker, which I had never heard of; and the challenges of using Azure Functions at ~45:00, including the impact of dependencies on other parts of the Azure platform. Thanks Yan and Ian for a very informative chat.
Very cool presentation. LocalStack looks neat for those who are vendor-locked. My hot take is: don't get vendor-locked in the first place. Choose OSS tech that you can host yourself (and run locally), or that cloud providers offer support for. If you have to use a cloud-specific API, use interfaces for adaptability and build software that is oblivious to the implementation.
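As a sketch of that interface-based approach (the names are illustrative, not from the talk): define a narrow port for what your app actually needs, then hide the vendor SDK behind an adapter.

```typescript
import { S3Client, PutObjectCommand } from "@aws-sdk/client-s3";

// The port: the only storage capability the application actually needs.
interface BlobStore {
  put(key: string, body: string): Promise<void>;
}

// Vendor-specific adapter - the only place the AWS SDK appears.
class S3BlobStore implements BlobStore {
  constructor(private s3 = new S3Client({}), private bucket = "my-bucket") {}
  async put(key: string, body: string): Promise<void> {
    await this.s3.send(
      new PutObjectCommand({ Bucket: this.bucket, Key: key, Body: body })
    );
  }
}

// A local/self-hosted stand-in for tests or for running outside the cloud.
class InMemoryBlobStore implements BlobStore {
  private blobs = new Map<string, string>();
  async put(key: string, body: string): Promise<void> {
    this.blobs.set(key, body);
  }
}

// Application code depends only on the interface, never on the vendor SDK.
async function saveReport(store: BlobStore, reportId: string, content: string) {
  await store.put(`reports/${reportId}.json`, content);
}
```

Swapping S3 for another provider (or LocalStack) then touches one adapter class, not the application code.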
For a more nuanced discussion about lock-in, have a watch of Gregor Hohpe's talk about the different dimensions of "lock-in" and understanding the cost of avoiding it: youtube.com/watch?v=Ud9h1hJgoKk He also has a good article, "Don't get locked up into avoiding lock-in", if you're interested in a different perspective: martinfowler.com/articles/oss-lockin.html
I'm thinking of doing it for the Production-Ready Serverless workshop (productionreadyserverless.com/) in May, but it'd be pretty high-level, because a) I haven't used it much myself, and b) some of the new features, like IAM enforcement, are only available on the paid plans (there is a 14-day free trial, but that's not enough for the length of the workshop).
Well said @BobHannet. It was an amazing project you guys built to stream DAZN to millions over the last 7-8 years :) It was also great to be part of it, on the Data side of the family :)
I like Ben Kehoe's definition of serverless as a spectrum: ben11kehoe.medium.com/the-serverless-spectrum-147b02cb2292 With that in mind, Fargate is more serverless than EC2, but not as serverless as Lambda, because it doesn't scale to 0 and doesn't offer usage-based pricing - two of the litmus tests for "fully serverless" services: www.gomomento.com/blog/fighting-off-fake-serverless-bandits-with-the-true-definition-of-serverless That's not to say Fargate is a bad option for Fathom, but it's definitely not as hands-off as Lambda, which is what the Fathom team is looking for from what I gathered.
Glad you liked it. It's one of my fav episodes so far as well. There are a lot of really interesting technical challenges that go into video streaming at scale; it's under-appreciated.
@Bluescape we used Step Functions to build AMIs hardened for GovCloud usage. The first step in the state machine was to check whether AWS had published a new AMI to begin with, otherwise quit, and it was nice to see this logic execute as state transitions. Overall we had about 20 state transitions to build an AMI, so the cost per AMI was $0.005. Since our step function ran once per week in 2 environments, the overall cost was $0.01 per week, or 52c per year.
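As a rough sketch of that "check first, quit early" shape using the AWS CDK - this is my reconstruction, not their actual pipeline; the function names and the `newAmiAvailable` flag are made up:

```typescript
import { Stack } from "aws-cdk-lib";
import * as sfn from "aws-cdk-lib/aws-stepfunctions";
import * as tasks from "aws-cdk-lib/aws-stepfunctions-tasks";
import * as lambda from "aws-cdk-lib/aws-lambda";

// Hypothetical reconstruction of the described pipeline: the first state
// checks whether AWS has published a new base AMI; if not, the execution
// ends immediately, costing only a couple of state transitions.
function buildAmiPipeline(
  stack: Stack,
  checkFn: lambda.IFunction,
  buildFn: lambda.IFunction
) {
  const checkForNewAmi = new tasks.LambdaInvoke(stack, "CheckForNewBaseAmi", {
    lambdaFunction: checkFn,
    payloadResponseOnly: true, // put the function's result directly on the state
    resultPath: "$.check",
  });

  const buildHardenedAmi = new tasks.LambdaInvoke(stack, "BuildHardenedAmi", {
    lambdaFunction: buildFn,
  });

  const noNewAmi = new sfn.Succeed(stack, "NoNewAmi");

  // Branch on the (made-up) flag returned by the check step.
  const definition = checkForNewAmi.next(
    new sfn.Choice(stack, "NewAmiPublished?")
      .when(sfn.Condition.booleanEquals("$.check.newAmiAvailable", true), buildHardenedAmi)
      .otherwise(noNewAmi)
  );

  return new sfn.StateMachine(stack, "AmiHardeningPipeline", {
    definitionBody: sfn.DefinitionBody.fromChainable(definition),
  });
}
```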
@theburningmonk Can you please give some concrete examples where cold starts are more spikey and stair-stepped? Just curious for specific use cases here. Thanks!
It's not that cold starts are more spikey, but when you have sudden spikes in traffic, cold starts are more likely to be a concern. With the introduction of proactive initialization (see aaronstuyvenberg.com/posts/understanding-proactive-initialization), most cold starts won't really impact users - e.g. when you do a deployment in the middle of the day, or when existing workers are replaced. But if you have a sudden spike in traffic, then you need more workers to handle the load, so Lambda has to create new execution environments on-demand, and you'll see lots of cold starts in that moment. An example would be a social network: when a popular user posts something, suddenly everyone logs in and starts interacting with the app. You can't plan for these kinds of spikes.
Greetings Yan Cui, I would like to know your opinion on when to use a container instead of a Lambda function, apart from when the process lasts more than 15 minutes. By the way, I'm eagerly awaiting the launch of productionreadyserverless.
When you either 1) can't use Lambda - e.g. because you need to run for more than 15 mins, or 2) it doesn't make economic sense to use Lambda in TCO terms - e.g. when you have a high-throughput API (say, averaging 1000+ RPS) and you have a team that knows how to run containers at scale. The TCO bit is important, because it depends on whether you need to hire additional skillsets into your organization. If you do, then that makes a massive difference, and that's why Jack said in the video that it'd only make sense for them to containerize if the cost of Lambda reaches 2-3 full-time DevOps engineers, because that's probably the minimum you'd need to operate a containerized environment 24/7.
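To put rough numbers on that break-even point, here's a back-of-envelope sketch. The inputs are my assumptions, not Jack's: 1,000 RPS sustained, 100 ms average duration, 512 MB memory, and us-east-1 on-demand pricing of $0.20 per 1M requests plus $0.0000166667 per GB-second.

```typescript
// Back-of-envelope Lambda cost at a sustained 1,000 RPS.
// All inputs below are assumptions for illustration, not figures from the video.
const rps = 1_000;
const avgDurationSec = 0.1;          // 100 ms average invocation
const memoryGb = 0.5;                // 512 MB
const secondsPerMonth = 30 * 24 * 3600;

const requestsPerMonth = rps * secondsPerMonth;              // 2.592 billion
const requestCost = (requestsPerMonth / 1_000_000) * 0.20;   // ~$518/month
const gbSeconds = requestsPerMonth * avgDurationSec * memoryGb;
const computeCost = gbSeconds * 0.0000166667;                // ~$2,160/month

console.log(`~$${Math.round(requestCost + computeCost)}/month`); // ~$2,678/month, ~$32k/year
```

Even at ~$32k/year, that's well below the fully-loaded cost of 2-3 DevOps engineers, which is Jack's point: the raw Lambda bill has to get very large before the people cost of running containers pays off.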
Same topic, focused on AppSync: Is an HTTP resolver to a payment API, for example, capable of handling 1000 transactions per second? I purchased your AppSync course and want to implement it where I work.
Always check the quotas page for the service you want to use. It might not be exhaustive, but it's a good starting point. You can find it by googling "<service name> quotas" or by going to the "Service Quotas" console in your AWS console. For AppSync specifically, you're looking for this page: docs.aws.amazon.com/general/latest/gr/appsync.html In terms of throughput, you're concerned with the "Rate of request tokens", which roughly translates to 2,000 RPS, but that's a soft limit and can be raised. There might be a hard limit somewhere, but it's likely much higher than the 2,000 RPS default.
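You can also pull the quotas programmatically. A small sketch using the Service Quotas API - note that "appsync" as the `ServiceCode` is my assumption; confirm it via `ListServicesCommand` first:

```typescript
import {
  ServiceQuotasClient,
  ListServiceQuotasCommand,
} from "@aws-sdk/client-service-quotas";

const client = new ServiceQuotasClient({ region: "us-east-1" });

// List every quota for AppSync, including which ones are adjustable
// (i.e. soft limits you can ask to have raised).
const { Quotas } = await client.send(
  new ListServiceQuotasCommand({ ServiceCode: "appsync" })
);

for (const q of Quotas ?? []) {
  console.log(`${q.QuotaName}: ${q.Value} (adjustable: ${q.Adjustable})`);
}
```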
The closest approximation is the initDuration, which is reported in the Lambda REPORT logs but is not a metric; you can analyze it with CloudWatch Logs Insights. There is also some other time associated with downloading the artefact and allocating a spot in the fleet, but those are outside of your control, so it's best to focus on the initDuration, which is how long it takes to initialize the execution environment on a worker instance.
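For example, here's a sketch of running that analysis with the CloudWatch Logs Insights API. The log group name is a placeholder; @initDuration is one of the fields Logs Insights auto-discovers from Lambda REPORT lines:

```typescript
import {
  CloudWatchLogsClient,
  StartQueryCommand,
  GetQueryResultsCommand,
} from "@aws-sdk/client-cloudwatch-logs";

const logs = new CloudWatchLogsClient({ region: "us-east-1" });

// Aggregate cold-start init times over the last 24 hours.
// "/aws/lambda/my-function" is a placeholder log group name.
const { queryId } = await logs.send(
  new StartQueryCommand({
    logGroupName: "/aws/lambda/my-function",
    startTime: Math.floor(Date.now() / 1000) - 24 * 3600,
    endTime: Math.floor(Date.now() / 1000),
    queryString: `
      filter @type = "REPORT" and ispresent(@initDuration)
      | stats count() as coldStarts,
              avg(@initDuration) as avgInitMs,
              max(@initDuration) as maxInitMs
    `,
  })
);

// Logs Insights queries are async - poll until the query completes.
let results;
do {
  await new Promise((r) => setTimeout(r, 1000));
  results = await logs.send(new GetQueryResultsCommand({ queryId }));
} while (results.status === "Running" || results.status === "Scheduled");

console.log(results.results);
```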
I don't see how that'd make sense, for a few reasons:
1. You'd probably lose any application state, because the two runtimes would need to run as separate processes.
2. Shipping both would also significantly increase your bundle size, and therefore hurt cold start performance.
3. If V8 only handles some executions, then you'd lose much of the value of JIT - one of the arguments for not shipping a JIT compiler with LLRT is that Lambda execution environments are short-lived, and having both would cut the usefulness of JIT even further.
4. If you need to initialize two runtimes, then the cold start time would be the sum of the two.
A lot of nice insights here! Besides the thought process on how Jack considers the "total cost of ownership", I also appreciated how he grew his network by blogging, being nice, and meeting new people. I can also relate to the part where he still enjoys coding, like I do. I also don't want to manage infrastructure and people; keeping the team lean works well for some, as does having the agency and freedom to do what's best while staying aligned with your peers.