Perhaps I don't have the appropriate history, but XML did a fine job of doing what it was supposed to do. I don't know of any other method (at the time of XML's inception) that allowed you to throw arbitrary fields into a file, describe them all, describe their relationships, reasonably convey to the user what can be done with all of it, and be both human-readable and machine-readable at the same time.
Perhaps I'm an idiot and JSON's a lot older than I'm giving it credit for, but I thought XML was (and still is) a decent tool. I also don't think that it's overly verbose given how descriptive it can be.
Before XML, you either had completely plain text files, like INIs (or, today, JSON). That's great for human-editable files, but defining complex types in such files is a bit arbitrary, and there's room for lots of ambiguity. What are the valid fields and values?
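To make that ambiguity concrete, here's a minimal sketch using Python's stdlib configparser (the section and key names are invented for illustration): every value comes back as a bare string, and nothing in the file itself declares which fields are valid or what types they should be.

```python
import configparser

# A hypothetical INI fragment: the format declares no types,
# no valid keys, and no required fields.
cp = configparser.ConfigParser()
cp.read_string("""
[server]
port = 8080
timeout = 30.5
""")

# Everything is just a string; the consumer has to guess the types
# and either ignores or trips over unknown fields.
print(type(cp["server"]["port"]))   # <class 'str'>
print(cp["server"]["port"])         # 8080
```

The point isn't that INI is bad, just that any typing or validation lives entirely in the consuming code, by convention.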
On the other hand, binary file types usually mapped to C structs. That resulted in files that were near impossible to read or write if you didn't have access to those structs. It was also vulnerable to encoding issues like byte order (endianness).
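The byte-order hazard is easy to demonstrate; here's a minimal sketch with Python's struct module (the record layout is invented): the same four bytes decode to a different number depending on which endianness the reader assumes.

```python
import struct

# A hypothetical record: one 32-bit integer, written by a big-endian
# machine and naively read back assuming little-endian.
value = 0x12345678
blob = struct.pack(">i", value)          # big-endian on disk
misread = struct.unpack("<i", blob)[0]   # reader assumes little-endian

print(hex(misread))  # 0x78563412 -- same bytes, wrong number
```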
XML bridged those extremes. When using XSD etc. correctly, it's possible (in theory) to validate locally whether a given file will be read and understood by any and all other software implementing the same standard. It removes (in theory) the ambiguity of the plain-text formats, while still producing output that's (in theory) immediately readable and editable by a human with no external references.
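As a rough illustration (the element and type names here are invented), even a tiny XSD pins down exactly which fields and types are legal, which is precisely what the plain-text formats above leave to convention:

```xml
<!-- Hypothetical schema: an <order> must contain exactly one string
     <sku> followed by one integer <quantity>; anything else fails
     validation. -->
<xs:schema xmlns:xs="http://www.w3.org/2001/XMLSchema">
  <xs:element name="order">
    <xs:complexType>
      <xs:sequence>
        <xs:element name="sku" type="xs:string"/>
        <xs:element name="quantity" type="xs:int"/>
      </xs:sequence>
    </xs:complexType>
  </xs:element>
</xs:schema>
```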
The promise of XML was to replace the big, proprietary binary files, not to be the end-all be-all of textually encoded data. Small, simple RESTful webservices have no need for XML. Big, complicated ones might not either, but once you've coded against an advanced WSDL API and the generated stubs just worked, complete with types and pre-wire validation, it's not hard to see the value.
I'm curious... how would you have developed a schema for S-Expressions without it ending up just as verbose? What do you lose? End tags?
See http://www.agentsheets.com/lisp/XMLisp/ for an example of what XML in Lisp would look like; I can't say it's much of an improvement. Further, end tags make human debugging much easier.
There are many possibilities. SXML is the one I use.
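For anyone unfamiliar with SXML, the mapping from XML is mechanical. Here's a hedged Python sketch of the convention (it approximates the real SXML spec — attributes grouped under an `@` node — but skips details like namespace expansion and text normalization):

```python
import xml.etree.ElementTree as ET

def to_sxml(elem):
    """Render an ElementTree element as an SXML-style s-expression string."""
    parts = [elem.tag]
    if elem.attrib:
        attrs = " ".join(f'({k} "{v}")' for k, v in elem.attrib.items())
        parts.append(f"(@ {attrs})")
    if elem.text and elem.text.strip():
        parts.append(f'"{elem.text.strip()}"')
    parts.extend(to_sxml(child) for child in elem)
    return "(" + " ".join(parts) + ")"

doc = ET.fromstring('<a href="index.html">home</a>')
print(to_sxml(doc))  # (a (@ (href "index.html")) "home")
```

Same tree, same information; the closing parens replace the end tags.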
You need to use a semi-structured editor, like paredit on emacs, to get all of the advantages, though. Otherwise as you mention you'll be grovelling for the closing paren, when the machine could have maintained the balance from the beginning.
Sexprs on their own are not enough - I think you're confusing surface syntax for the whole semantic lot. XML's namespaces, for example, let different applications mark up the XML without risk of clobbering one another. JSON isn't really comparable to XML at all - it's much weaker.
But namespaces are just more data. Namespaces, in my experience, have just turned into a hurdle that requires a bunch of needless element decoration to get it to play nice with XPath. The only place I know of where there is great confusion over what "XML" document you're processing is a browser trying to decide what type of document it's working with, i.e. all of the various versions of HTML and XHTML. That's not an argument for namespacing, that's an argument against forking markup languages. In every case where I've built XML processing services, I always knew what the purpose of the document was ahead of time.
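That decoration complaint is easy to reproduce with Python's stdlib ElementTree (the document contents are invented): once a default namespace is declared, the obvious path silently stops matching until every step is qualified with a prefix.

```python
import xml.etree.ElementTree as ET

doc = ET.fromstring(
    '<invoice xmlns="urn:example:billing">'
    '<item>widget</item>'
    '</invoice>'
)

# The unqualified path silently finds nothing...
print(doc.find("item"))  # None

# ...until you decorate the query with a namespace prefix.
ns = {"b": "urn:example:billing"}
print(doc.find("b:item", ns).text)  # widget
```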
Then there's XML Schema. In practice, I suspect few people actually validate documents against an XSD, in the rare cases where one even exists. Have you ever tried writing a reasonably sized XSD? Why are attributes treated differently from elements with a single text node as a child? Why can't I specify that a certain attribute is a prerequisite for another? Why are we specifying document structure and default values at the same time? What safety is type information actually giving me if I have to validate in my consuming service anyway?
I've had to process XML documents that claimed to be of such-and-such standard schema, and it invariably turns out untrue. There's always some 11pm job that fails to validate a doc because the author added an extra attribute or misspelled one you don't care about. Eventually, you just turn validation off and start doing things much more dynamically. I've always had to infer the schema from the document itself, otherwise my apps were too rigid for real-world use.
Someone else mentioned that S-exprs would need an intelligent editor to make writing them more coherent. I personally think the same is true for XML: reasonably large XML documents (i.e. anything beyond tutorial demonstrations) present the same problem of guessing which tags are already closed and in which order they should be closed, if only because their openings have scrolled off the top of the screen. Sure, you know you're closing a DIV tag right now, but which one? How long did it take to get decent XML syntax validation, while your Lisp interpreter has been sitting there quietly for the last 60 years? Throw in validation against namespacing and schema, and you still need an intelligent editor either way.
I don't really buy that XML is human readable, either. You don't read a 5000-line XML document; there's nothing to be gained from reading raw XML in that case (even if you have a raw text editor that can handle the file size). XML might have a slight advantage in being easier to demonstrate to new people learning, but once you get into actually using these tools, you don't ever use them in the "human readable/editable" format. That'd be like managing your finances with nothing but text files.
I'm just saying, Lisp was already well understood and parsed by the time XML was invented, and even XML + XSLT isn't on par with Lisp. Namespacing and XML Schema just turn out to be conventions that people say they're going to adhere to, but never do. Like I said, just like speed limits on roads.