Unexpected behavior of async/await in create_task

Post by Guest »

I was debugging some production code in my application. I have replaced the actual code with dummy code below, but the functionality stays the same:

Code: Select all

import asyncio

class RequestHandler:
    def __init__(self, max_concurrent_requests):
        # Semaphore to limit the number of concurrent requests
        self.semaphore = asyncio.Semaphore(max_concurrent_requests)

    async def another_async_function(self, url):
        # This function will perform the actual async task, like a network request
        print(f"Start task for {url}")
        await asyncio.sleep(5)  # Simulate network delay
        print(f"End task for {url}")
        return url

    async def fake_request(self, url):
        # Using async with the semaphore to limit concurrency
        async with self.semaphore:
            # Instead of directly using await, call and return the result of another async function
            return self.another_async_function(url)

    async def next(self, urls):
        tasks = []
        # Create and schedule tasks for each URL
        for url in urls:
            print(f"Scheduling task for {url}...")
            task = asyncio.create_task(await self.fake_request(url))
            tasks.append(task)

        # Yield the results as tasks complete
        for task in tasks:
            result = await task  # Wait for task to complete and get result
            yield result  # Yield result as each task completes

    async def main(self):
        urls = ["url1", "url2", "url3", "url4", "url5", "url6", "url7"]

        # Get the async generator from next() and process each result as it's yielded
        async for result in self.next(urls):
            print(f"Processed result: {result}")

# Running the main function
request_handler = RequestHandler(max_concurrent_requests=2)  # Limit concurrency to 2
asyncio.run(request_handler.main())
I believe that using await inside create_task is wrong here:

Code: Select all

task = asyncio.create_task(await self.fake_request(url))
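To check my understanding, I reduced it to the toy example below (the names outer and inner are mine, not from my real code). If I am reading it right, awaiting outer() only runs outer itself, which hands back an un-awaited coroutine object; the actual work does not start until create_task schedules it:

Code: Select all

import asyncio

async def inner():
    # The actual work; nothing here runs until the coroutine is scheduled
    await asyncio.sleep(1)
    return "done"

async def outer():
    # Returns the coroutine object itself instead of awaiting it
    return inner()

async def main():
    coro = await outer()              # runs outer(), NOT inner()
    print(type(coro))                 # <class 'coroutine'>
    task = asyncio.create_task(coro)  # inner() starts only now
    print(await task)                 # done

asyncio.run(main())

If that reading is correct, the await in my create_task line finishes almost immediately, before any real work has happened.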

Because of this, I expected a sequential result, but instead I get parallel execution for all URLs. How is that possible? Shouldn't processing each URL block the for loop in the next method, since there is an await inside the create_task call? This is the output I get:

Code: Select all
Scheduling task for url1...
Scheduling task for url2...
Scheduling task for url3...
Scheduling task for url4...
Scheduling task for url5...
Scheduling task for url6...
Scheduling task for url7...
Start task for url1
Start task for url2
Start task for url3
Start task for url4
Start task for url5
Start task for url6
Start task for url7
End task for url1
End task for url2
End task for url3
End task for url4
End task for url5
End task for url6
End task for url7
Processed result: url1
Processed result: url2
Processed result: url3
Processed result: url4
Processed result: url5
Processed result: url6
Processed result: url7

What am I missing here? How does this code actually run? All URLs are printed as executing concurrently even though the semaphore limit is 2.
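For comparison, this is the semaphore pattern I assumed my fake_request was equivalent to (a minimal sketch; the class name LimitedHandler is mine). Here the semaphore is held while the work is awaited, so at most two requests run at once:

Code: Select all

import asyncio

class LimitedHandler:
    def __init__(self, max_concurrent_requests):
        self.semaphore = asyncio.Semaphore(max_concurrent_requests)

    async def another_async_function(self, url):
        print(f"Start task for {url}")
        await asyncio.sleep(5)  # Simulate network delay
        print(f"End task for {url}")
        return url

    async def fake_request(self, url):
        # The semaphore is held for the full duration of the work,
        # so at most max_concurrent_requests coroutines run at a time
        async with self.semaphore:
            return await self.another_async_function(url)

async def main():
    handler = LimitedHandler(max_concurrent_requests=2)
    tasks = [asyncio.create_task(handler.fake_request(u))
             for u in ["url1", "url2", "url3", "url4"]]
    print(await asyncio.gather(*tasks))

asyncio.run(main())

With this version the "Start task" lines appear two at a time, which is the behavior I expected from my original code.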
