Isn't the implementation dependence exactly the point? IDL (CORBA IDL, at least) does not stand alone; it's accompanied by the appropriate programming language mappings. And the OO variants (C++ and Java, and maybe others) invariably define that the implementation is handled by an object. That's why I think Jim has a point here.
As to the lower entry barrier, I believe this is not due to a conceptual difference but because tools in general have become much better. If Systinet were in the CORBA business, I doubt a "CORBA Developer" plugin for Eclipse would be any harder to use, or a server any harder to deploy, than the WASP versions.
I see the lower entry barrier mainly in the use of XML and in the readability of the core specs (stop hitting me with a printout of WSDL 2, please!), not in the tools. Many more people can create web services than CORBA services without needing a vendor's package.
I agree that if the toolkit is taken out of the equation, you are right.
I'd say that the implementation of the operation isn't the important part. What's important is the shared expectation between client and server about what the request, and therefore the response, means. With IDL, and as I'm hearing is still the case with WSDL, there is such a shared expectation. Jim, Savas, and I argue that this is suboptimal (to be polite 8-).
I believe the shared expectation is always there, even if the Web defers it to the human user and leaves only a very vague meaning for the methods. Wherever we want to automate some processes, the parties (automata) need to "know what to invoke and why", I'd say. That's where IDL is necessary.
Again, for human-oriented applications and for apps where the necessary semantics are limited (e.g. where only cacheability information is needed), the meaning communicated by the IDL will be limited as well, e.g. that the operation POST adds the representation as a subordinate resource (see the exchange sketched below).
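
For illustration, a hypothetical HTTP exchange showing that generic POST semantics: the client POSTs a representation to a collection, and the server creates a subordinate resource and names it in the Location header of its 201 Created response. The host and paths are made up.

    POST /orders HTTP/1.1
    Host: example.org
    Content-Type: application/xml

    <order><item>widget</item></order>

    HTTP/1.1 201 Created
    Location: http://example.org/orders/17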
I still want to be able to create my interface in which the operation add adds two integers, for I still believe it's useful where intelligence isn't present.
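
As a minimal sketch of what is meant here (the name Calculator is hypothetical, and this is plain Java rather than the literal CORBA IDL-to-Java mapping), such a contract pins the operation's meaning down for both sides up front:

    // Hypothetical contract: client and server agree in advance that
    // add(a, b) means integer addition, so add(2, 3) must yield 5.
    public interface Calculator {
        int add(int a, int b);
    }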
Consider that what the Web encourages, even for machine-to-machine interaction, is for parties to put their expectations in essentially two forms: that you'll process a document at my request (POST), and that you'll give me a document when I ask for one (GET). It works, in practice, like this:
Agent A:
- listens via Rendezvous ...
- for some RDF which declares ...
- that there exists a resource of type CertainKindOfDataConsumer
- so it sends some data to that resource via HTTP POST
while Agent B:
- comes on line
- takes inventory of its own capabilities
- announces those capabilities via RDF/Rendezvous
- one of those capabilities is a data processor of type CertainKindOfDataConsumer
Agent A and Agent B share no knowledge of one another; in order to communicate, they only need to know HTTP, URIs, RDF, Rendezvous, TCP/IP, and what a "CertainKindOfDataConsumer" is. So the only "shared expectation" is the one provided by those standards, specifically where HTTP says that POST is a way for the client to request that the server process the data. Agent A may not know or care what Agent B will do with the data; all it needs to know is that B accepts the data for processing, which is what a successful response to an HTTP POST tells it.
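
A rough sketch of Agent A's half of this in modern Java (11+, using java.net.http); the discovery step is stubbed out, and the URI is a made-up stand-in for whatever the RDF announcement named:

    import java.net.URI;
    import java.net.http.HttpClient;
    import java.net.http.HttpRequest;
    import java.net.http.HttpResponse;

    public class AgentA {
        public static void main(String[] args) throws Exception {
            // Assume discovery (Rendezvous + RDF) already yielded the URI of a
            // resource declared to be of type CertainKindOfDataConsumer.
            URI consumer = URI.create("http://example.org/consumer"); // hypothetical

            HttpRequest request = HttpRequest.newBuilder(consumer)
                    .header("Content-Type", "application/rdf+xml")
                    .POST(HttpRequest.BodyPublishers.ofString("<rdf:RDF>...</rdf:RDF>"))
                    .build();

            // A 2xx status says only "accepted for processing"; Agent A neither
            // knows nor cares what Agent B actually does with the data.
            HttpResponse<String> response = HttpClient.newHttpClient()
                    .send(request, HttpResponse.BodyHandlers.ofString());
            System.out.println("POST status: " + response.statusCode());
        }
    }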
Taking the dependency - the shared expectation - out of the networking, communication, middleware layer (whatever you want to call it) just moves it up to another layer, IMO. It doesn't remove it altogether.
Application protocols embody the notion of shared expectation, unlike transport protocols. Hence my focus on the transport vs. transfer and protocol-independence issues for the past few years.