
multithreading - How does an asynchronous socket server work?

I should state that I'm not asking about specific implementation details (yet), just for a general overview of what's going on. I understand the basic concept behind a socket, and need clarification on the process as a whole. My (probably very wrong) understanding is currently this:

A socket is constantly listening for clients that want to connect (in its own thread). When a connection occurs, an event is raised that spawns another thread to perform the connection process. During the connection process the client is assigned its own socket with which to communicate with the server. The server then waits for data from the client, and when data arrives an event is raised which spawns a thread to read the data from a stream into a buffer.

My questions are:

How off is my understanding?

Does each client socket require its own thread to listen for data on?

How is data routed to the correct client socket? Is this something taken care of by the guts of TCP/UDP/kernel?

In this threaded environment, what kind of data is typically being shared, and what are the points of contention?

Any clarifications and additional explanation would be greatly appreciated.

EDIT:

Regarding the question about what data is typically shared and the points of contention: I realize this is more of an implementation detail than a question about the general process of accepting connections and sending/receiving data. I had looked at a couple of implementations (SuperSocket and Kayak) and noticed some synchronization for things like session caches and reusable buffer pools. Feel free to ignore this question. I've appreciated all your feedback.



1 Answer


One thread per connection is bad design (not scalable, overly complex) but unfortunately way too common.

A socket server works more or less like this:

  • A listening socket is set up to accept connections, and added to a socket set
  • The socket set is checked for events
  • If the listening socket has pending connections, new sockets are created by accepting the connections, and then added to the socket set
  • If a connected socket has events, the relevant IO functions are called
  • The socket set is checked for events again

This all happens in one thread. You can easily handle thousands of connected sockets in a single thread, and there are few valid reasons to make this more complex by introducing threads.

while running
    select on socketset
    for each socket with events
        if socket is listener
            accept new connected socket
            add new socket to socketset
        else if socket is connection
            if event is readable
                read data
                process data
            else if event is writable
                write queued data
            else if event is closed connection
                remove socket from socketset
            end
        end
    done
done
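
To make the pseudocode concrete, here is a minimal runnable sketch of the same loop using Python's selectors module (a portable wrapper over select/epoll/kqueue). The port number and the echo behavior are illustrative assumptions, not anything prescribed above:

import selectors
import socket

sel = selectors.DefaultSelector()

# The listening socket, set up to accept connections and added to the "socket set"
listener = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
listener.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
listener.bind(("0.0.0.0", 12345))   # port is an arbitrary choice for this example
listener.listen()
listener.setblocking(False)
sel.register(listener, selectors.EVENT_READ)

while True:
    for key, _events in sel.select():   # block until some socket has events
        sock = key.fileobj
        if sock is listener:            # listener readable => pending connection
            conn, _addr = sock.accept() # accept new connected socket
            conn.setblocking(False)
            sel.register(conn, selectors.EVENT_READ)  # add it to the socket set
        else:                           # connected socket has events
            data = sock.recv(4096)
            if data:
                # "process data" stands in for your protocol; here we just echo.
                # A real server would queue any unsent bytes and wait for EVENT_WRITE.
                sock.send(data)
            else:                       # recv() returning b"" means the peer closed
                sel.unregister(sock)    # remove socket from socket set
                sock.close()

This single loop is exactly the "one thread, thousands of sockets" model described above: the selector plays the role of the socket set, and accept/recv/send are the relevant IO functions.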

The IP stack takes care of all the details of which packets go to which socket, and in what order. Seen from the application's point of view, a socket represents a reliable ordered byte stream (TCP) or an unreliable unordered sequence of packets (UDP).
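
As a small illustration of that kernel demultiplexing (assuming a server such as the sketch above is listening on 127.0.0.1:12345): each TCP connection is identified by its (local address, local port, remote address, remote port) 4-tuple, so two clients talking to the same server port still get independent byte streams.

import socket

# Hypothetical demo: two connections to the same listening port.
a = socket.create_connection(("127.0.0.1", 12345))
b = socket.create_connection(("127.0.0.1", 12345))

# Same remote endpoint, but different local ports => different 4-tuples,
# which is what the kernel uses to route each incoming segment.
print(a.getsockname(), "->", a.getpeername())
print(b.getsockname(), "->", b.getpeername())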

EDIT: In response to updated question.

I don't know either of the libraries you mention, but regarding the concepts:

  • A session cache typically keeps data associated with a client, and can reuse this data across multiple connections. This makes sense when your application logic requires state information, but it's a layer above the actual networking code. In the above sample, the session cache would be used by the "process data" part.
  • Buffer pools are also an easy and often effective optimization for a high-traffic server. The concept is very simple to implement: instead of allocating/deallocating space for the data you read/write, you fetch a preallocated buffer from a pool, use it, and return it to the pool. This avoids the (sometimes relatively expensive) backend allocation/deallocation machinery. It is not directly related to networking; you can just as well use buffer pools for, e.g., something that reads chunks of files and processes them. A toy sketch follows this list.
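
A toy buffer pool along those lines might look like this (the class and method names are made up for the example; a threaded server would additionally need synchronization around the pool):

from collections import deque

class BufferPool:
    """Hands out fixed-size, reusable bytearrays instead of allocating fresh ones."""

    def __init__(self, count, size):
        self._size = size
        self._free = deque(bytearray(size) for _ in range(count))

    def acquire(self):
        # Fall back to a fresh allocation if the pool is temporarily exhausted.
        return self._free.popleft() if self._free else bytearray(self._size)

    def release(self, buf):
        self._free.append(buf)

pool = BufferPool(count=64, size=4096)
buf = pool.acquire()
# ... e.g. n = sock.recv_into(buf) inside the server loop above ...
pool.release(buf)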
