I can imagine that any XML-based syntax (e.g. DocBook) would be best for that.
Yes! I also feel that's the most mature and stable format, and the most suitable for long-term archiving of longer works that have had effort put into "structuring" them. However, it's not a good format for normal humans to actually work in. I personally think AsciiDoc is the way to go for that: it yields readable diffs and is therefore practical to store in a VCS.
I cannot imagine that a "simple" syntax (like txt2tags) would be a good interchange format, because it lacks the extra features that richer syntaxes support. A simple syntax is good for users to learn and use, but it's a poor target for "re-translating" one syntax into another.
But each piece is just one part of the toolchain; there isn't going to be "one tool to rule them all" that handles all your needs, which of course will be different from mine. The point is selecting tools that enable interoperability and allow us to create automated, integrated knowledge management toolchains.
Regarding this particular issue, the key for me is to define an information-architecture taxonomy and use the right syntax for the job. A fundamental distinction for me is between the 99% of content that consists of relatively unstructured "chunks" and "snippets" of useful reference information that need to be readily accessible, and the more formally structured levels above it. The next level up is the "article", composed of structured "sections" and "subsections", with, if substantial, a TOC, perhaps an index, and some cross-referencing, footnotes, etc. "Books", which can be "volumes" in a "set" and can have "parts", almost certainly have "chapters", which are then broken down into sections as with articles above.
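As an illustration, that book-level hierarchy maps naturally onto AsciiDoc's structural markup. All titles below are placeholders, and it assumes the standard AsciiDoc "book" doctype:

```asciidoc
= My Book Title
:doctype: book

= Part One

== Chapter 1

=== A Section

==== A Subsection

Chunk/snippet content lives here.
```

The same section/subsection levels serve on their own for a standalone article; the part and chapter levels only come into play once you assemble pieces into a book.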
From the "chunk/snippet" level up through "sections" and "chapters", the simpler syntax provides all the inline formatting required.
It's only when a given writer or group wants to go to the trouble of assembling these into larger, more formal works that the higher-level structuring, indexing, etc. needs to be added. My current thinking is to use the easier low-level syntax (txt2tags) for the 99% of data that remains at the chunk/snippet level, and to convert "up" to AsciiDoc for works in the process of being structured more formally, perhaps for "publication", whatever that might mean in a given context.
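Much of that "up" conversion is mechanical at the inline level. Here is a minimal sketch in Python, illustrative only: the rule table covers just three constructs, and a real toolchain would lean on txt2tags' own converters or an intermediate format rather than hand-rolled regexes.

```python
import re

# Illustrative mapping of a few common txt2tags inline constructs
# to their AsciiDoc equivalents. Not a complete converter.
RULES = [
    (re.compile(r"\*\*(.+?)\*\*"), r"*\1*"),  # bold:      **x** -> *x*
    (re.compile(r"//(.+?)//"), r"_\1_"),      # italic:    //x// -> _x_
    (re.compile(r"``(.+?)``"), r"`\1`"),      # monospace: ``x`` -> `x`
]

def t2t_inline_to_asciidoc(line: str) -> str:
    """Apply each txt2tags->AsciiDoc inline rule to one line of text."""
    for pattern, replacement in RULES:
        line = pattern.sub(replacement, line)
    return line

print(t2t_inline_to_asciidoc("a **bold** word and //italic// text"))
# -> a *bold* word and _italic_ text
```

The point is that because both syntaxes mark the same semantics, chunk-level content written in the simpler syntax loses nothing when promoted into the richer one.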
Obviously any decent programming editor, usually Vim or Emacs (Org mode!), can also be programmed to process such content - and a fundamental goal of mine is to keep the data, even while it's "in process" (which it always is), open and accessible to these basic tools.
However, there are two tools I'm currently investigating that add more value out of the box. One is the Python outliner/programming editor Leo (search "Leo-editor"), and the other is DokuWiki. The former has great potential due to its flexible data model, which handles multiple "views" of content via "cloned subtrees" and integrates with scripting (I wish I were a programmer!). Coming from the Python world, Leo's native markup syntax is reST/Sphinx, which is relatively "open" for transformation via the Pandoc project.
However, Leo is basically a tool for individuals rather than groups, and that's where DokuWiki comes in, enabled by its "transparent container" data model. I can of course keep txt2tags or reST or Markdown or whatever syntax files in DW, but when seeking collaboration from the larger group, many of whom are non-technical, it would be better if they could just deal with the normal wiki UI rather than having to learn a syntax.
Sorry to go on, but I'm hoping this conversation will allow for continued cross-fertilization of ideas...