Your problem doesn't exist for System.Net.Http.HttpClient, so try it instead. It reuses existing connections (no DNS cache workaround is needed for such calls), which looks like exactly what you want to achieve. As a bonus it supports HTTP/2, which can be enabled by assigning a property when the HttpClient instance is created.
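As a minimal sketch of enabling HTTP/2, assuming .NET Core 3.0 or later where HttpClient.DefaultRequestVersion is available:

```csharp
using System.Net;
using System.Net.Http;

// Requests will default to HTTP/2 when the server supports it,
// falling back to HTTP/1.1 otherwise.
HttpClient client = new HttpClient
{
    DefaultRequestVersion = HttpVersion.Version20
};
```

On .NET 5+ you can additionally set DefaultVersionPolicy to control whether downgrading is allowed.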
WebRequest is ancient and not recommended by Microsoft for new development. In .NET 5, HttpClient is also considerably faster.
Create the HttpClient instance once per application (link).
private static readonly HttpClient client = new HttpClient();
Here is the analog of your request. Note that await is available only in methods marked as async.
string text = await client.GetStringAsync("https://www.somehost.com/resources/b.txt");
You may also issue multiple requests at once without spawning threads:
string[] urls = new string[]
{
    "https://www.somehost.com/resources/a.txt",
    "https://www.somehost.com/resources/b.txt"
};
List<Task<string>> tasks = new List<Task<string>>();
foreach (string url in urls)
{
    tasks.Add(client.GetStringAsync(url));
}
string[] results = await Task.WhenAll(tasks);
If you're not familiar with asynchronous programming (async/await), start with this article.
You can also limit how many requests are processed at once. Let's make the same request 1000 times, capped at 10 concurrent requests.
static async Task Main(string[] args)
{
    Stopwatch sw = new Stopwatch();
    string url = "https://www.somehost.com/resources/a.txt";
    using SemaphoreSlim semaphore = new SemaphoreSlim(10);
    List<Task<string>> tasks = new List<Task<string>>();
    sw.Start();
    for (int i = 0; i < 1000; i++)
    {
        // Wait for a free slot before starting the next request.
        await semaphore.WaitAsync();
        tasks.Add(GetPageAsync(url, semaphore));
    }
    string[] results = await Task.WhenAll(tasks);
    sw.Stop();
    Console.WriteLine($"Elapsed: {sw.ElapsedMilliseconds}ms");
}
private static async Task<string> GetPageAsync(string url, SemaphoreSlim semaphore)
{
    try
    {
        return await client.GetStringAsync(url);
    }
    finally
    {
        // Free the slot so the next queued request can start.
        semaphore.Release();
    }
}
Run it and compare the elapsed time yourself.