Tim's a little bothered by WS-Addressing introducing instances of stateful services into Web services, and correctly asks what the difference between stateful Web services and distributed objects is. IMO, the real answer is both not much and yet enough.
Here are a few major technical factors to consider when evaluating Web services versus distributed objects:
1) Extensibility and versioning,
2) State,
3) Re-usable verbs,
4) Remote invocation style.
There are also political factors, namely who's at the table.
Web services, by the use of XML and extensibility mechanisms, can be more loosely coupled than distributed objects because the interface can evolve. HTML is the poster child for this decentralized evolvability. XML with namespaces gives a lot of potential for evolvability, but we've half-blown it with Web services because extensibility is so hard - particularly the lack of mustIgnore rules, the Schema UPA rule, and the lack of a default extensibility model in Schema. We probably have enough extensibility to make Web services different from distributed objects on this axis, but Web service implementations by and large are as brittle as distributed objects. :-( And we're still doing it, because the WSDL 2.0 group won't commit to a versioning story for services - the techy issue is that WSDL 2.0 doesn't provide a service with a way of indicating whether a revision is compatible or incompatible with the earlier service.
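To make the mustIgnore point concrete, here's a sketch (element names and namespaces are made up for illustration) of a v1 consumer that extracts the fields it knows and silently skips extension elements added by a later revision - the rule that keeps HTML evolvable and that Schema gives us no default way to express:

```python
# A "must ignore unknown extensions" consumer. Namespaces and element
# names here are hypothetical, not from any real schema.
import xml.etree.ElementTree as ET

KNOWN_NS = "urn:example:po:v1"   # the namespace this v1 consumer understands

def read_order(xml_text):
    """Extract the fields a v1 consumer knows, ignoring everything else."""
    root = ET.fromstring(xml_text)
    known = {}
    for child in root:
        # ElementTree tags look like "{namespace}localname"
        ns, _, local = child.tag[1:].partition("}")
        if ns == KNOWN_NS:
            known[local] = child.text
        # else: mustIgnore -- a v2 sender can add elements without breaking us
    return known

# A v2 message with an extension element the v1 consumer has never seen:
V2_MESSAGE = """
<order xmlns="urn:example:po:v1" xmlns:v2="urn:example:po:v2">
  <item>widget</item>
  <qty>3</qty>
  <v2:giftWrap>true</v2:giftWrap>
</order>
"""
```

With mustIgnore, `read_order(V2_MESSAGE)` just returns the v1 fields; without it, the unexpected `v2:giftWrap` element would be a validation failure and the revision would be breaking.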
There are 2 Webs that are out there wrt state:
1) Web resources that have a URI, are stateless, and work with HTTP GET - this is called "on the web";
2) Web resources that require some state or data - through HTTP Cookies or POST data - and so are not "on the web". Not a lot of people understand that an HTML FORM POST result is not "on the web" because there's no URI for the result. But that's ok!
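The two-webs distinction can be sketched as a toy model (URIs, handler names, and data are all made up): a GET result is named by a URI that anyone can re-fetch, while a FORM POST result is returned once and never named.

```python
# Toy model of the two webs. Everything here is illustrative.
RESOURCES = {"/reports/2004": "annual report"}   # "on the web": URI -> state

def http_get(uri):
    # Anyone holding the URI (a user, a cache, a crawler) gets the same
    # representation back; the result IS the resource the URI names.
    return RESOURCES[uri]

def http_post(uri, form_data):
    # The result is computed inside one request/response exchange and
    # returned, but no URI ever names it -- it is not "on the web".
    return "results for " + form_data["q"]

page = http_post("/search", {"q": "soap"})
```

The POST result works fine for the client that asked for it; it just can't be bookmarked, linked, or cached by anyone else.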
They both work and scale just fine. Let me say that again: stateful services scale fine. Tim couples stateful with pinning to a server: "If a service implementation requires pinning to a particular server to work, then it isn't going to work in a real enterprise environment." Firstly, pinning to a particular server actually does work in an enterprise environment. Seriously. Secondly, there are lots of ways to migrate state from one node to another without the client knowing. BTW, this is one reason why BEA has lobbied hard for mutable EPRs. Point being, stateful services can scale just fine.
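Here's a minimal sketch of why stateful doesn't mean pinned, assuming a dict standing in for a replicated session store (all names are illustrative): when session state lives in a shared store, any node can handle the next request, and the client never learns which node answered.

```python
# Shared across nodes -- think replicated cache or database, not one box.
SESSION_STORE = {}

class Node:
    def __init__(self, name):
        self.name = name

    def add_to_cart(self, session_id, item):
        # State is looked up by session id, not held in node-local memory,
        # so it doesn't matter which node the load balancer picked.
        cart = SESSION_STORE.setdefault(session_id, [])
        cart.append(item)
        return len(cart)   # the client sees cart size, never the node name

node_a, node_b = Node("a"), Node("b")
# The load balancer happens to route request 1 to node a, request 2 to node b:
size1 = node_a.add_to_cart("session-1", "book")
size2 = node_b.add_to_cart("session-1", "pen")
```

The second request finds the first request's state even though a different node served it - the "migration" is invisible to the client.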
There is a difference between stateful and stateless when it comes to system properties other than scalability. They have dramatically different properties for intermediaries, as the "on the web" resources can be secured, cached, inspected, etc. more efficiently. I think it's pretty safe to say that doing security on the web, with SSL + HTTP Authentication or Cookies, is simpler than securing Web services, with WS-Security + WS-Trust + WS-SecureConversation + WS-Policy + WS-SecurityPolicy + WS-I Basic Security Profile + ? WS-I Trust Profile? The property that Web services give you is way more flexibility, but it comes at a significant cost. Stateful services can hinder some of the properties other than scalability, but that seems to be ok in the scenarios that the flexibility calls for.
The reality is that there's a big chunk of the web that is not "on the web" and is stateful, and so we can't just say the web is about stateless and any introduction of stateful web services means breaking the web. The horse left the barn going after the HTTP cookie.
The really great stuff on the web happens with 1 verb: GET. POST is basically a no-op, and PUT and DELETE usage is almost non-existent. That one verb does all the magic that we think of for the Web. But Web services can't really use that verb. I think that Web services really blew it by not providing a default SOAP binding to HTTP GET. Nobody uses the soap-response MEP (which is HTTP GET + SOAP response) because it takes SOAP people out of the SOAP data model on the sending side. The worlds of SOAP and the Web are pretty much separate because SOAP can't really use the Web's resources.
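A quick sketch of why the one re-usable verb matters to intermediaries (URIs, operation names, and the dict-as-cache are all made up): a generic cache can store GET responses keyed by URI without understanding the application, but a custom operation hidden in a POSTed SOAP body is opaque to it.

```python
CACHE = {}          # a generic intermediary's cache, keyed by URI
ORIGIN_CALLS = []   # record of requests that actually reached the origin

def origin(verb, uri, body=None):
    ORIGIN_CALLS.append((verb, uri))
    return verb + " " + uri + " result"

def intermediary(verb, uri, body=None):
    if verb == "GET":
        # GET is safe and idempotent, so the response is cacheable
        # with zero knowledge of the application.
        if uri not in CACHE:
            CACHE[uri] = origin(verb, uri)
        return CACHE[uri]
    # A custom "verb" buried in a POSTed body can't be cached generically --
    # every request has to go to the origin.
    return origin(verb, uri, body)

intermediary("GET", "/stock/BEA")
intermediary("GET", "/stock/BEA")                      # second GET: cache hit
intermediary("POST", "/stockService", "<getQuote/>")   # operation in the body
intermediary("POST", "/stockService", "<getQuote/>")   # origin hit again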
It also turns out that we don't have any reasonable way of bringing the XML world into the URI world (I've bashed at this problem for sooo long it's painful), so the re-usable HTTP GET verb doesn't really "see" xml in nearly the same way that SOAP does. Maybe we'll be able to get the benefits of HTTP GET by the adoption of WS-Transfer (or my proposed WS-GET subset) but most Web services folks just go on making new verbs for read and write operations without thinking about re-usable GET. And besides, I don't hear the thunder of WS-Transfer GET intermediaries.
We do remember that distributed objects were all about custom operations right?
Speaking of RPC, one of the "knocks" on distributed objects is that they are RPCs. This is allegedly bad because the client and server are coupled. When I say RPC, I mean that a client makes a synchronous, custom method invoke on a remote machine. I don't mean that SOAP RPC or encoding are used; those aren't really the issue. Yet Web services are almost all RPC-style invokes. It doesn't matter if Doc/literal is used if the SOAP body contains a custom method and a synchronous invoke is done. The Web is fairly RPC-ish too; it's just a standardized set of verbs and some other constraints. An HTTP GET on URI foo.com/whatever is certainly a synchronous remote method invoke.
But people have forgotten what I think is the big reason that RPC got into trouble, which was the "R" part. Distributed objects tried to make the remote procedure call look as if it was local. The idea is that you take a local service and just "remote" it. And this breaks because you have to know about the network, particularly latency and reliability. The Web came along and showed that the application MUST know that it is making a remote invoke and that the network can't be abstracted away from the application.
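The Web's lesson in miniature might look like this sketch, where a simulated flaky service stands in for an unreliable network (all names are made up): the caller owns the retry/backoff policy, instead of a stub pretending the call is local.

```python
import time

class RemoteError(Exception):
    pass

attempts = {"count": 0}

def flaky_service():
    # Simulates an unreliable network: fails twice, then succeeds.
    attempts["count"] += 1
    if attempts["count"] < 3:
        raise RemoteError("simulated network failure")
    return "payload"

def invoke_with_retry(call, retries=3, backoff=0.01):
    # The application chooses the retry and backoff policy -- the network
    # is visible to it, not abstracted away the way classic RPC stubs tried.
    for i in range(retries):
        try:
            return call()
        except RemoteError:
            if i == retries - 1:
                raise
            time.sleep(backoff * (2 ** i))   # exponential backoff

result = invoke_with_retry(flaky_service)
```

A "local-looking" stub would have just thrown the first failure at code that never expected a network to exist.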
Ironically, most of our tools do the same kind of thing that we did with RPC, by autogenerating SOAP/WSDL wrappers for services. Even more ironically, we decided that "RPC/Encoded" was bad for interop, that we should move to Doc/Literal, and yet customers are screaming about the lack of interop of Schema. Ooosh.
In spite of that, we do seem to be finally moving towards an interface centric design philosophy. The WSDL, Schema, SOAP, etc. are front and centre in people's minds. And this is a far better place to be than we were with distributed objects.
Another aspect of RPC is the synchronicity. Web services are finally getting standards around asynchrony, particularly WS-Addressing. For the most part, Web services are synchronous interactions. Effectively, Web services are remote method invokes, but with knowledge of the remoteness.
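As a rough sketch of what WS-Addressing adds for asynchrony (the header names come from the WS-Addressing spec; the URIs and message id are made up), a request carries a MessageID and a ReplyTo EPR so the response can arrive later, on a separate connection:

```python
import xml.etree.ElementTree as ET

WSA = "http://www.w3.org/2005/08/addressing"   # W3C WS-Addressing namespace
ET.register_namespace("wsa", WSA)

header = ET.Element("Header")
# MessageID lets the eventual response be correlated back (via RelatesTo).
ET.SubElement(header, "{%s}MessageID" % WSA).text = "uuid:example-1234"
# ReplyTo is an EPR: where to deliver the response, possibly much later.
reply_to = ET.SubElement(header, "{%s}ReplyTo" % WSA)
ET.SubElement(reply_to, "{%s}Address" % WSA).text = "http://client.example/callback"

xml_out = ET.tostring(header, encoding="unicode")
```

With those headers the service doesn't need to hold the HTTP connection open; the interaction stops being a synchronous invoke.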
Where are we?
I've shown that along some pretty important technical facets - extensibility, versioning, state, verb re-usability, synchronicity - Web services aren't that different from distributed objects. There is the issue that the bulk of Web services can't take advantage of Web infrastructure. Sure, Web services use XML with Namespaces, and that buys a lot for interoperability. The knowledge of the network is an important differentiator between Web services and distributed objects.
The challenge for anybody trying to prove that Web services = or != distributed objects is to quantify the differences or similarities in actual architecture terms - like identity, state, lifecycle, verbs, synch/asynch, message exchange patterns. Show how Web services are more or less brittle than distributed object technology at a technical level. Not just "Oh, Web services are SOA and distributed objects are objects and we all know services are better than objects." That's yucky thinking.
Web services are pretty close to distributed objects at a technical level, but Web services != distributed objects at a political level because we roughly have all the big vendors working together. It would be nice if the distributed object folks wanted to try some new approaches (hey, URIs!) but we'll get Web services to work technically and politically, because the technical differences are the important ones (remote knowledge) and the politics are better.