Distributed applications usually use some kind of message-passing mechanism for communication (or remote procedure calls, an approach I do not prefer). One way to use message passing with Qt (and the way I am going to use here) is to create sockets and pump data through them using QDataStream and objects that offer I/O operators. Of course, one also has to handle split messages and all the other boring setup work...
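To make the "split messages" problem concrete: a TCP socket delivers a byte stream, not message boundaries, so each message is typically prefixed with its length and reassembled on the receiving side. Here is a minimal sketch of that idea in Python (the generated Qt code would do the equivalent with QDataStream; the names here are my own, not part of mpq.py):

```python
import struct

def frame(payload: bytes) -> bytes:
    """Prefix the payload with its length as a 4-byte big-endian integer."""
    return struct.pack(">I", len(payload)) + payload

class Deframer:
    """Reassembles complete messages from arbitrarily split stream chunks."""
    def __init__(self):
        self.buf = b""

    def feed(self, chunk: bytes):
        """Append a chunk; return the list of complete messages so far."""
        self.buf += chunk
        messages = []
        while len(self.buf) >= 4:
            (length,) = struct.unpack(">I", self.buf[:4])
            if len(self.buf) < 4 + length:
                break  # message still incomplete; wait for more data
            messages.append(self.buf[4:4 + length])
            self.buf = self.buf[4 + length:]
        return messages
```

Whatever chunking the network produces, the receiver only ever sees whole messages, which is exactly the boilerplate the generated code should hide.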
What I think of is just describing your messages in a very simple format:
    message msg1 {
        int i;
        QString s;
    }

    message msg2 {
        double f;
        msg1 msg;
    }

Or more generally:

    message <msg_name> {
        <field_type> <field_name>;
    }

The field_type may be any type that can be read from and written to a QDataStream. Once the message definitions exist, mpq.py reads all your IDL files and generates the necessary Qt implementation from them. This covers the network handling (connecting to a remote host, accepting connections, reusing existing connections, ...) and the signal handling (emitting a signal whenever a message arrives, offering slots for sending messages).
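The first step of such a generator is parsing the message definitions. As an illustration only, a minimal parser for the format above could look like this (the function names and regular expressions are my own sketch, not mpq.py's actual code):

```python
import re

# Matches "message <name> { <body> }" in the simple IDL format above.
MESSAGE_RE = re.compile(r"message\s+(\w+)\s*\{([^}]*)\}")
# Matches one "<field_type> <field_name>;" declaration inside a body.
FIELD_RE = re.compile(r"(\S+)\s+(\w+)\s*;")

def parse_idl(text):
    """Return {message_name: [(field_type, field_name), ...]}."""
    messages = {}
    for name, body in MESSAGE_RE.findall(text):
        messages[name] = FIELD_RE.findall(body)
    return messages
```

From such a structure the generator can emit the QDataStream operators, the signals for incoming messages, and the slots for outgoing ones.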
Other ideas include adding encryption and compression, debugging support, 1:1 and 1:n relations between client and server, ...
For a sample implementation, look at the test code generated by the -t switch of mpq.py.
At this stage I am thinking about some kind of remote model: the client creates a model describing which data it needs from which tables, in a format that can both be easily transformed into SQL and be understood by the server. The server can then check the query (applying access rights or some business logic), fetch the data and return it to the client. From the client's point of view, our RemoteModel works like a QSqlTableModel, but it implements all the 3-tier stuff.
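Such a query description could be a small structure that the server validates before translating it into SQL. A sketch of that idea (the field names and the whitelist check are my own illustrative assumptions, not a fixed design):

```python
# Hypothetical client-side query description that a server validates
# and translates into SQL. Structure and whitelist are assumptions.
ALLOWED_TABLES = {"customers", "orders"}  # server-side access rights

def query_to_sql(query):
    """query = {"table": str, "columns": [str, ...], "where": str or None}"""
    if query["table"] not in ALLOWED_TABLES:
        raise PermissionError("access to table denied")
    sql = "SELECT %s FROM %s" % (", ".join(query["columns"]), query["table"])
    if query.get("where"):
        sql += " WHERE " + query["where"]
    return sql
```

The point is that the client never sends raw SQL; it sends a description the server can inspect, which is where the access rights and business logic of the 3-tier setup hook in.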