AWS Lambda
Lambda
Lambda functions can be deployed as a .zip archive or as a container image (any OCI-compatible image, e.g. built with Docker)
We can use reserved concurrency to limit the maximum number of concurrent invocations for your function.
Function's maximum request rate per second (RPS) = 10 * maximum reserved concurrency
Setting reserved concurrency to 0 deactivates the function: all invocations are throttled
Common best-practice patterns:
- Separate business logic from the handler method
- Keep each function single-purpose: prefer 3 separate functions over 3 things in one function
- Reduce dependency size, e.g. import only the DynamoDB client instead of the entire aws-sdk
- Each function must be stateless; store state in DynamoDB, S3, or EFS
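The first pattern can be sketched like this (module and function names are hypothetical): the handler only adapts the event to and from the business logic, which stays a plain, easily unit-tested function.

```python
import json

def calculate_order_total(items):
    """Pure business logic: no Lambda-specific types, trivially testable."""
    return sum(item["price"] * item["quantity"] for item in items)

def handler(event, context):
    """Lambda entry point: only parses input and formats output."""
    items = json.loads(event["body"])["items"]
    total = calculate_order_total(items)
    return {"statusCode": 200, "body": json.dumps({"total": total})}
```

Because `calculate_order_total` never sees `event` or `context`, it can be tested without mocking any Lambda machinery.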
IAM resource policy: controls what is allowed to trigger the function.
IAM execution role: controls what the function can do, e.g. write CloudWatch Logs, CRUD on a table, etc.
SAM grants only the permissions for the resources declared in the template.
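As a sketch, a SAM template can scope the execution role with a policy template; the resource names below are hypothetical, but `DynamoDBCrudPolicy` is one of SAM's built-in policy templates:

```yaml
Resources:
  OrdersFunction:
    Type: AWS::Serverless::Function
    Properties:
      Handler: app.handler
      Runtime: python3.12
      Policies:
        # SAM policy template: grants CRUD only on the named table
        - DynamoDBCrudPolicy:
            TableName: !Ref OrdersTable
  OrdersTable:
    Type: AWS::Serverless::SimpleTable
```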
- Pay as you go, pay for value
- Charged per request (number of invocations)
- Duration: invocation time x allocated memory (GB); memory is configurable from 128 MB to 10,240 MB (10 GB)
- Increasing memory size also proportionally increases the CPU available
- x86_64 or arm64: arm64 (AWS Graviton) offers up to 34% better price performance
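The duration-times-memory pricing model can be made concrete with a rough calculator. The rates below are illustrative assumptions (they vary by Region and architecture and change over time; check the AWS pricing page), but the structure of the formula is the point:

```python
# Illustrative rates, NOT current official prices.
PRICE_PER_REQUEST = 0.20 / 1_000_000   # ~$0.20 per 1M requests
PRICE_PER_GB_SECOND = 0.0000166667     # duration price per GB-second

def monthly_cost(invocations, avg_duration_s, memory_mb):
    """Rough monthly cost: (requests) + (duration x allocated memory)."""
    gb = memory_mb / 1024
    duration_cost = invocations * avg_duration_s * gb * PRICE_PER_GB_SECOND
    request_cost = invocations * PRICE_PER_REQUEST
    return duration_cost + request_cost

# Example: 5M invocations/month, 200 ms average duration, 512 MB memory
cost = monthly_cost(5_000_000, 0.2, 512)
```

Note how halving the duration (say, by allocating more memory and getting more CPU) can reduce cost even though the per-GB-second rate applied is higher.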
Internet access when attached to a VPC
By default, Lambda functions have access to the public internet. When you attach your function to a VPC, it can only access resources available within that VPC. If you want internet access from a VPC-attached function, you must enable internet access for that VPC, e.g. by routing outbound traffic through a NAT gateway in a public subnet.
Lambda Scaling
Function's maximum request rate per second (RPS) = 10 * maximum reserved concurrency
Understanding and visualizing concurrency
Lambda invokes your function in a secure and isolated execution environment, whose lifecycle has two phases:
- Init
- Invoke
Reusing initialized environments for subsequent requests, at a concurrency of 1
Reusing initialized environments for subsequent requests, at a concurrency of 10:
- Request 1: Provisions new environment A
- Reasoning: This is the first request; no execution environment instances are available.
- Request 2: Provisions new environment B
- Reasoning: Existing execution environment instance A is busy.
- Request 3: Provisions new environment C
- Reasoning: Existing execution environment instances A and B are both busy.
- Request 4: Provisions new environment D
- Reasoning: Existing execution environment instances A, B, and C are all busy.
- Request 5: Provisions new environment E
- Reasoning: Existing execution environment instances A, B, C, and D are all busy.
- Request 6: Reuses environment A
- Reasoning: Execution environment instance A has finished processing request 1 and is now available.
- Request 7: Reuses environment B
- Reasoning: Execution environment instance B has finished processing request 2 and is now available.
- Request 8: Reuses environment C
- Reasoning: Execution environment instance C has finished processing request 3 and is now available.
- Request 9: Provisions new environment F
- Reasoning: Existing execution environment instances A, B, C, D, and E are all busy.
- Request 10: Reuses environment D
- Reasoning: Execution environment instance D has finished processing request 4 and is now available.
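The provision-or-reuse decision walked through above can be modeled with a toy simulator (a sketch for intuition; real Lambda placement is more sophisticated than "first free environment wins"):

```python
def simulate(arrivals, duration):
    """Toy model of Lambda environment reuse.

    arrivals: sorted request arrival times; duration: fixed run time.
    Returns the environment label ('A', 'B', ...) serving each request.
    """
    free_at = []       # free_at[i] = time at which environment i becomes free
    assignments = []
    for t in arrivals:
        # Reuse the first environment already free; otherwise provision one.
        for i, free in enumerate(free_at):
            if free <= t:
                free_at[i] = t + duration
                assignments.append(chr(ord("A") + i))
                break
        else:
            free_at.append(t + duration)
            assignments.append(chr(ord("A") + len(free_at) - 1))
    return assignments

# Three back-to-back requests each need their own environment; once the
# earliest finishes, later requests start reusing instead of provisioning.
labels = simulate([0, 1, 2, 5, 6], duration=5)  # ['A', 'B', 'C', 'A', 'B']
```

The number of distinct labels at any moment is exactly the concurrency of the function at that moment.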
Calculating concurrency for a function
Concurrency = (average requests per second) * (average request duration in seconds)
Suppose you have a function that takes, on average, 200 ms to run. During peak load, you observe 5,000 requests per second. What is the concurrency of your function during peak load?
Concurrency = (5,000 requests/second) * (0.2 seconds/request) = 1,000
Alternatively, an average function duration of 200 ms means that one execution environment can process 5 requests per second. To handle the 5,000 requests per second workload, you need 1,000 execution environment instances. Thus, the concurrency is 1,000:
Concurrency = (5,000 requests/second) / (5 requests/second) = 1,000
A second example: suppose a function averages 50 ms per request and receives 200 requests per second. The duration-based calculation gives:
Concurrency = (200 requests/second) * (0.05 seconds/request) = 10
But each unit of concurrency supports at most 10 RPS, so a concurrency of 10 can handle only 100 RPS and throttling will occur. To handle the 200 RPS load you need a concurrency of at least 200 / 10 = 20.
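Both constraints, in-flight requests (RPS x duration) and the 10 RPS per unit of concurrency rate cap, can be combined in a small helper based on the formulas above:

```python
import math

MAX_RPS_PER_CONCURRENCY = 10  # Lambda's per-concurrency request rate cap

def required_concurrency(rps, avg_duration_s):
    """Concurrency needed to serve `rps` without throttling."""
    by_duration = rps * avg_duration_s           # average in-flight requests
    by_rate = rps / MAX_RPS_PER_CONCURRENCY      # 10 RPS per concurrency unit
    return math.ceil(max(by_duration, by_rate))

# Long-running functions are duration-bound; very fast functions are
# rate-bound, which is why the 50 ms example needs 20, not 10.
```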
Concurrency control in Lambda
There are two types of concurrency control in Lambda.
Reserved concurrency control:
- The maximum number of concurrent executions you want to allocate to the function
- No extra charge
- No other function can use that reserved concurrency
t4 - t5: the unreserved functions throttle (200+ requests against a 200 limit), independently of the others
t3 - t4: the orange function throttles (400+ against its 400 reserved limit), independently of the others
The function can't scale out of control and exhaust the account's concurrency
Controlled scaling
The account may not be able to fully utilize its total concurrency, since reserved capacity sits idle when unused
Provisioned concurrency control
- Pre-initializes execution environments to mitigate cold-start latency
- Requests served by provisioned environments respond immediately
- t1: immediate invocation thanks to provisioned concurrency
- t2: the provisioned limit of 400 is reached
  - Requests now spill over into unreserved concurrency
  - Lambda uses unreserved capacity
  - New execution environments are created and cold-start latencies are experienced
- t3: traffic returns to within provisioned capacity
- t4: a burst in traffic again spills into unreserved concurrency
- t5: the full account/Region concurrency limit is reached and throttling is experienced
The diagram below is similar and is how it should be done in production: cap functions with reserved concurrency alongside provisioned concurrency, so that functions without provisioned capacity are not starved of concurrency and throttled into failure.
- Add aliases; the default unpublished version is $LATEST
- An alias is a pointer to a specific function version