What happens if the server is created with multiple processes (for multiple cores) with `uvicorn main:app --workers 4`? Are the first 4 clients each given a new process? Or can a client be handled by either a process or another thread, if a thread is in a non-blocking task? And what happens when there are more than 4 clients at the same time in this case? Are the new clients assigned to threads randomly across the 4 processes?
This is very valuable content. New job, starting to work on FastAPI. The whole Python paradigm is new for me. This video helped clear up one of the most critical concepts for me. Thanks brother.
Hello, great video. I am super new to Python, and started using it very recently for backend work, where it's being used along with FastAPI. Can you please tell me what are some concepts/topics I should be aware of for developing efficient code? For example, from this video I learnt which functions run concurrently and which don't. I had no idea about this before, tbh. If you could spare a few minutes and share some stuff I should know, it would be great.
I have so many questions, I am so sorry. Should you pass `app` to the `AsyncClient()` as well for the ASGI server? Also, how would the last example affect APIRouters defined in separate files? How would they reach `app.client`? They could have their own separate lifespan functions, but wouldn't it be preferred to use the same object? Also, I don't know, but wouldn't it be better to use `app.state.client`? Oh, and what about POST requests, would you still use the `Request` object to reach the client? It works, but it gets messy code-wise.
Well, the issue with these kinds of libraries, imho, is that real-life code is not a single module separated from everything else. It's much bigger than that and has a ton of dependencies and/or setup and teardown required. Some tests require quite a lot of fixtures, and some of those fixtures are configurable as well. The conclusion is... I'd have to manually change the great majority of those auto-generated tests to make them any good, so I might as well write them on my own from scratch. And it doesn't support Python 3.11+, which is a big issue.
This is not really about FastAPI, it's about sync functions vs. async functions: just don't call a sync function inside an async function if the sync one takes a lot of time.
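To make that point concrete, here is a small standalone sketch (plain `asyncio`, no FastAPI) showing why a long-running sync call inside an `async def` blocks the whole event loop, while an awaitable equivalent lets tasks overlap:

```python
import asyncio
import time


async def bad():
    time.sleep(1)           # sync call: blocks the entire event loop for 1 s


async def good():
    await asyncio.sleep(1)  # yields control: other tasks keep running


async def main():
    start = time.perf_counter()
    await asyncio.gather(good(), good(), good())
    concurrent = time.perf_counter() - start   # ~1 s: the sleeps overlap

    start = time.perf_counter()
    await asyncio.gather(bad(), bad(), bad())
    blocking = time.perf_counter() - start     # ~3 s: the sleeps serialize
    return concurrent, blocking


concurrent, blocking = asyncio.run(main())
print(f"concurrent: {concurrent:.1f}s, blocking: {blocking:.1f}s")
```

Same total "work", very different wall-clock behaviour, which is exactly the trap the video warns about.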
My question would be how FastAPI then manages the workload when it's handed over to the worker thread, because I can only see one worker thread running while it handles 40 'workloads' concurrently at the same time.
I believe you are testing APIs in the browser. Sometimes, browsers like Chrome have limitations on making parallel requests to the same URL. In the video, if you look closely, I am using two different browsers to hit the same API in parallel. You can try the same approach.
I need some help. I want to create a FastAPI endpoint that calls a synchronous function with a lot of blocking I/O operations. But I want the endpoint function to run asynchronously so it can accept many requests at the same time. How should I do this? Is there an alternative approach?
The only way to achieve that is to use multi-threading, which I advise against... Instead, make the function asynchronous and try to find a non-blocking equivalent for what you want to do...
Better still, use the `run_in_threadpool` function from FastAPI to run the process in a different thread so that you don't block the event loop. It's better than implementing multi-threading on your own.
Using `return condition` directly is more efficient when you only need to return the boolean value itself. However, the purpose of my comparison was to explore different styles of conditional statements and their performance, especially in scenarios where the if-else structure is necessary for more complex logic.
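For readers following along, the two styles being compared look like this (a trivial made-up predicate, just to show the shapes):

```python
def is_even_verbose(n: int) -> bool:
    # if/else structure: needed when the branches do more than return a bool
    if n % 2 == 0:
        return True
    else:
        return False


def is_even(n: int) -> bool:
    # `return condition` directly: the comparison already yields a bool
    return n % 2 == 0


print(is_even_verbose(4), is_even(4))  # True True
print(is_even_verbose(3), is_even(3))  # False False
```

Both are equivalent for a pure boolean result; the direct return is shorter and skips a branch, which is the efficiency point made above.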
As per the LEGB rule, it is considered an enclosing-scope variable: neither local nor global, but it behaves like both inside the nested function, most commonly seen inside decorators.
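A small sketch of what that enclosing (the "E" in LEGB) scope looks like in a decorator (the `count_calls` decorator here is my own example, not from the video):

```python
from functools import wraps


def count_calls(func):
    calls = 0  # enclosing scope: not local to wrapper, not global either

    @wraps(func)
    def wrapper(*args, **kwargs):
        nonlocal calls  # required to rebind the enclosing variable
        calls += 1
        print(f"{func.__name__} call #{calls}")
        return func(*args, **kwargs)

    return wrapper


@count_calls
def greet(name):
    return f"hello {name}"


greet("a")
greet("b")
```

Without the `nonlocal` declaration, `calls += 1` would raise `UnboundLocalError`, because the assignment would make `calls` local to `wrapper` while the read happens before any local binding exists.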
Hello, I am not able to run Pynguin. I tried your approach and got this error:

(base) seva@air-von-seva Pynguin_input % pynguin --project-path . --module-name test_example_2 --output-path .
Traceback (most recent call last):
  File "/Users/seva/anaconda3/bin/pynguin", line 5, in <module>
    from pynguin.cli import main
  File "/Users/seva/anaconda3/lib/python3.11/site-packages/pynguin/__init__.py", line 9, in <module>
    import pynguin.generator as gen
  File "/Users/seva/anaconda3/lib/python3.11/site-packages/pynguin/generator.py", line 31, in <module>
    import pynguin.analyses.seeding as seeding  # pylint: disable=consider-using-from-import
  File "/Users/seva/anaconda3/lib/python3.11/site-packages/pynguin/analyses/seeding.py", line 25, in <module>
    import pynguin.ga.testcasechromosome as tcc
  File "/Users/seva/anaconda3/lib/python3.11/site-packages/pynguin/ga/testcasechromosome.py", line 13, in <module>
    import pynguin.ga.chromosome as chrom
  File "/Users/seva/anaconda3/lib/python3.11/site-packages/pynguin/ga/chromosome.py", line 13, in <module>
    import pynguin.ga.computations as ff
  File "/Users/seva/anaconda3/lib/python3.11/site-packages/pynguin/ga/computations.py", line 17, in <module>
    from pynguin.testcase.execution import ExecutionTrace
  File "/Users/seva/anaconda3/lib/python3.11/site-packages/pynguin/testcase/execution.py", line 425, in <module>
    class ExecutionTracer:
  File "/Users/seva/anaconda3/lib/python3.11/site-packages/pynguin/testcase/execution.py", line 460, in ExecutionTracer
    Compare.IN: lambda val1, val2: (
  File "/Users/seva/anaconda3/lib/python3.11/enum.py", line 784, in __getattr__
    raise AttributeError(name) from None
AttributeError: IN