Monday, 18 July 2011

HTTP or it doesn't exist.

This is my 'or it doesn't exist' blog post[1][2][3]. I think everyone should have one ;-)

A big chunk of my life is processing electronic information. Since I would like it to be a (slightly) smaller chunk of my life, I want to automate as much as possible. Now ideally, I don't want a massive disconnect between what I have to do as a human processor of information and what I need to tell a computer to get that job done without my help. Because it's easier that way.

So when I hear that the information I need to process is in some spreadsheet or other on a Windows share, it makes me a little sad. When I hear that it is available via a sensible REST interface in a sensible format, my heart leaps for joy just a little.

With something like Python's standard library (and third-party package) support for HTTP (requests), XML (ElementTree) and JSON, I should be able to get my computer to do most of the manual data processing tasks which involve 'documents' of some form or other.
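As a minimal sketch of the sort of thing I mean (the example.com endpoints here are made up, purely for illustration):

#!/usr/bin/env python
# Sketch: consuming 'documents' over HTTP in Python.
# The endpoints are hypothetical; the libraries are the ones above.
import xml.etree.ElementTree as ET

import requests  # third-party: pip install requests

# JSON: requests parses the body straight into Python lists/dicts.
items = requests.get('http://example.com/api/items.json').json()
print(len(items), 'items')

# XML: hand the raw bytes to ElementTree.
root = ET.fromstring(requests.get('http://example.com/api/items.xml').content)
for item in root.findall('item'):
    print(item.get('name'))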

In a previous job I worked at convincing anyone who would listen that 'XML over HTTP' was the best thing since sliced bread. With appropriate XSLT and CSS links, the same data source (i.e. URI) could be happily consumed by both man and machine. Admittedly most of the information was highly structured data - wire protocols and the like, but it still needed to be understandable by real people and real programs.

I'm not an XML expert, but I think I 'get' it. I never understood why it needed so much baggage though, and can't say I'm sad that the whole web services thing seems to be quietly drifting into the background - though maybe it was always trying to.

A lot changes in web technology in a short time, and XML is no longer 'cool', so I won't be quite as passionate about 'XML over HTTP' as I once was. For short fragments it is far more verbose than JSON, though I'd argue that for longer documents, XML's added expressiveness makes the verbosity worth it. Maybe it was ever thus, but whenever two technologies have even the slightest overlap, there seems to be a territorial defensiveness which makes the thought of using both in one project seem somewhat radical. So while I've used JSON much more than XML in the last couple of years, I've not turned against it. If done right (Apple, what were you thinking with plist files!?) - it is great. Compared to JSON-like representations, the ability to have attributes for every node in the tree is a whole new dimension in making a data source more awesome or usable (or terribly broken and rubbish). I've seen too many XML documents where either everything is an attribute or nothing is, but it's not exactly rocket science.

Things I liked about XML:

Simplicity
I like to think I could write a parser for XML 1.0 without too much effort. If it's not well formed, stop. Except for trivial whitespace normalisation etc., there is a one-to-one mapping of structure to serialisation. Compare that with the mess of HTML parsers. While HTML5 might now specify how errored documents should be parsed (i.e. what the resulting DOM should be), I suspect that an HTML5 -> DOM parser is a far more complex beast.
Names! Sensible Names!
Because HTML is limited in its domain, it has a fixed (though growing thanks to the living standard[4] which is HTML) set of tags. When another domain is imposed on top of that, the class attribute tends to get pressed into service in an ugly and overloaded way. By allowing top-level tags to be domain-specific, we can make the document abstraction more 'square'[5].
Attributes
Attributes allow metadata to be attached to document nodes. Just as a lower-level language is fully capable of creating a solution to any given problem, having 'zero mental cost' abstractions (such as the data structures provided by high-level languages) enables new ways of thinking about problems. In the same way, having attributes on data nodes doesn't give us anything we couldn't implement without them, but it provides another abstraction which I've found invaluable and missed when using or creating JSON data sources.
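To make that concrete, here's a toy comparison (the names are invented): the same value with its metadata carried as XML attributes, next to the wrapper object a JSON representation forces on you.

import json
import xml.etree.ElementTree as ET

# XML: metadata rides along as attributes, without disturbing the data.
size = ET.Element('size', units='bytes', estimated='true')
size.text = '4096'
print(ET.tostring(size))
# -> <size units="bytes" estimated="true">4096</size> (attribute order may vary)

# JSON: there is no attribute axis, so the plain value has to grow
# a wrapper object to carry the same metadata.
print(json.dumps({'size': {'value': 4096, 'units': 'bytes', 'estimated': True}}))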

What does make me slightly(!) sad though is the practical demise of XHTML and any priority that browsers might give to processing XML. There is now a many-to-one mapping of markup to DOM, and pre-HTML5 (and still in practice for the foreseeable future, considering browser idiosyncrasies and bugs) - a many-to-many mapping. It wouldn't surprise me if XSLT transform support eventually disappeared from browsers.

Maybe there's a bit of elitism here - if you can't code well-formed markup and some decent XSLT (preferably with lots of convoluted functional programming thrown in) - then frankly 'get off my lawn!'. I love the new features in HTML(5), but part of me wishes that there was an implied background 'X' unquestionably preceding that, for all things. The success of the web is that it broke out of that mould. But in doing so it has compromised the formalisms which machines demand and require. Is the dream of the machine-readable semantic web getting further away - even as cool and accessible (and standards-compliant - at last) web content finally looks like it might possibly start to achieve its goal? Is it too much (and too late) to dream of 'data' (rather than ramblings like this one) being available in the same form for both the human viewer and the computer automaton?

I'm prepared to be realistic and accept where we've come to. It's not all bad, and the speed with which technology is changing has never been faster. It's an exciting time to wield electronic information, and we've got the tools to move forward from inaccessible files stored on closed, disconnected systems. So where I used to say 'XML over HTTP', my new mantra shall now be 'HTTP or it doesn't exist'. At least for a while.

Wednesday, 6 April 2011

xmlcmd: adding an --xml option, one command at a time

In my last post, I wrote some thoughts on how using the text-based meta-language[1] of regular expressions to filter and manipulate structured data from UNIX commands was not fully exploiting the jigsaw-puzzle approach of 'the unix philosophy'[2], and that XPath (and by implication XML) provided an alternative where structured data on the command line was concerned.[3]

I also mentioned how great things could be if, like subversion, every POSIX command-line tool had an --xml switch to make it output XML. (There are many programs with XML output, but the main POSIX ones[4] don't have this as an option.)

Here's one I made earlier

I was always aware of the danger of overstating the case, but sometimes that can be helpful. Or at least fun. And I'd already started prototyping something which looked fun, dangerous, and potentially useful. This is intended to be illustrative rather than a serious suggestion, but there might be many other cases where the concepts can be used more seriously.

1. Add a path

There isn't any magic to what we're doing in adding --xml options, and we're not touching the original programs. We're just using the fact that the PATH in POSIX operating systems contains an ordered list of entries, and we're simply inserting a 'hook' early on in the path which can catch and redirect certain formats of command, while transparently forwarding others.

I tend to have a ~/bin directory on my path anyway (taking good care that it is only writable by myself) - so I'm already set, but if not, you'll need a directory which appears first on the PATH.

ben$ mkdir -p ~/bin
add that path to the start of your login path (e.g. in .bashrc or .bash_profile):
export PATH=$HOME/bin:$PATH
Once that is done, anything in that directory will be run in preference to anything else. Put an 'ls' file in there, something like the following:
#!/usr/bin/env python
print("These are not the files you're looking for")
make it executable (chmod +x ~/bin/ls) and you won't be able to run 'ls' anymore. Except you are running it, it's just a different ls, and not doing anything particularly helpful. You can always run the original ls with a fully specified path (or try using $(whereis ls)).

Two more things make this potentially useful:

  • Finding the next program on the PATH, which would have been run if something else hadn't sneaked in first
  • Selectively running either this 'original' program or some different code based on relevant criteria (e.g. existence of --xml in the command line options)
and the following makes things practical:
  • Making the two things above easily reusable for any command.

2. The magic of args[0]

Most of the time most programs ignore args[0] - the program's own name. But what if args[0] could be treated as a command line option, just like all the others? What makes this possible is having multiple symbolic links to a single program. args[0] is then the name of the original symlink by which the process was called, so although the same program is ultimately running, it can determine in what way it was called. It can therefore change its own operation. This technique is used in the busybox project to implement a generous number of commands in a single executable.
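Here's a toy illustration of the technique (the script and symlink names are invented): one file, several personalities, selected purely by the name it was invoked under.

#!/usr/bin/env python
# multitool.py - dispatch on argv[0], busybox-style.
# Try: ln -s multitool.py hello && ln -s multitool.py goodbye
import os
import sys

def main():
    name = os.path.basename(sys.argv[0])
    if name == 'hello':
        print('Hello!')
    elif name == 'goodbye':
        print('Goodbye!')
    else:
        print('Run me via one of my symlinks: hello or goodbye')

if __name__ == '__main__':
    main()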

3. Introducing xmlcmd

xmlcmd is a Python package which supports this terrible corruption of POSIX as it should always be. The main xmlcmd module code is fairly straightforward, and is shown below. This finds the original program (which would have been run if we weren't first on the path), and then either execs that (if no --xml option is provided), or runs some Python code in a dynamically imported Python module (_xyz from the xmlcmd package, where xyz is the 'original' command name) if --xml is present.

#!/usr/bin/python
"""
xmlcmd.py - support for adding an --xml option to various commands

Ben Bass 2011. (Public Domain)
"""

import sys
import os
import which  # from PyPI 'which' package

def process_cmd(cmd_name, args, orig_cmd_path):
    """
    import and call the main() function from the module
    xmlcmd._{cmd}
    """
    module = __import__('xmlcmd._%s' % cmd_name,
                        fromlist=['_%s' % cmd_name])
    raise SystemExit(module.main(args, orig_cmd_path))

def main(args=None):
    """
    run system command from sys.argv[:], where sys.argv[0]
    implies the real command to run (e.g. via symlinks to us)
    """
    if args is None:
        args = sys.argv

    # args[0] will be a full path - we only want the command name
    cmd_name = os.path.basename(args[0])
    if cmd_name.startswith('xmlcmd'):
        raise SystemExit('xmlcmd should not be called directly')

    # get the command which would have run if we hadn't sneaked
    # ahead of it in the $PATH
    cmd_path_gen = which.whichgen(cmd_name)
    cmd_path_gen.next()   # skip first match (us)
    orig_cmd_path = cmd_path_gen.next()

    if '--xml' in args:
        args.remove('--xml')
        # forward to our xmlized version...
        process_cmd(cmd_name, args, orig_cmd_path)
    else:
        # execv *replaces* this process, so it has no idea it
        # wasn't called directly. Total transparency.
        os.execv(orig_cmd_path, args)

if __name__ == '__main__':
    main()

4. The implementations

The real work is all handled in the _{cmd} modules of course, so admittedly we've really only moved the problem around a bit. But the point of this exercise is about the ease with which we can add these new entry points into existing systems. Nothing slows down in any noticeable way, and it would be easy to extend an entire class of commands, one at a time, by nothing more than adding a Python module and creating a symlink.

For reference, the main() function from _ls.py looks something like this:

import os
import sys
import xml.etree.ElementTree as ET

def main(args=None, orig_cmd_path=None):
    """very basic xml directory listing"""
    if len(args) > 1:
        target_dir = args[-1]
        if not os.path.isdir(target_dir):
            raise SystemExit('%s is not a directory' % (target_dir,))
    else:
        target_dir = os.getcwd()

    root = ET.Element('directory', name=target_dir)
    for fn in os.listdir(target_dir):
        stat = os.stat(os.path.join(target_dir, fn))
        f_el = ET.SubElement(root, 'file', mtime=str(stat.st_mtime))
        ET.SubElement(f_el, 'name').text = fn
        ET.SubElement(f_el, 'size').text = str(stat.st_size)
    ET.ElementTree(root).write(sys.stdout, 'utf-8')
    sys.stdout.write('\n')

5. Example

ben$ sudo pip install which xmlcmd
(yup, it's on PyPI) will install the xmlcmd Python package (and the 'which' dependency), and an xmlcmd wrapper script which should end up on the path. With that done, you can now create the magic symlinks:
ben$ ln -sf $(which xmlcmd) ~/bin/ls
ben$ ln -sf $(which xmlcmd) ~/bin/ps
And now, assuming things are working properly (a quick hash -r/rehash can't hurt), you should now be able to do wonderful things like this:
ben$ ps --xml aux | xpath '//process/command/text()[../../cpu > 2.5]'
which in this case displays the command name of all processes currently taking more than 2.5% of the CPU. Sure, the XPath isn't exactly elegant. But the point is that patterns of this micro-language would be shared between tasks, and manipulating structured data on the UNIX command line would become as easy as text manipulation is now.

Here's some they made earlier...

Having said and done all that, a few searches later (for 'posix commands' in this case) brought up xmlsh.org, which seems to do some very similar things.

I also found (via [2]) xmltk, which at first glance seems to have beaten me to these ideas by about 9 years... :-)

Notes

[1]
'Regular expressions are notations for describing patterns of text and, in effect, make up a special-purpose language for pattern matching.' Brian Kernighan, Beautiful Code (ed. Andy Oram & Greg Wilson, O'Reilly Media Inc).
[2]
The Art of Unix Programming, Eric S. Raymond. Especially the 'Rule of Composition'; see Chapter 1. (Note this book also praises text of course...)
[3]
What a pointlessly long sentence.
[4]
POSIX 2 (Commands and Utilities) covers these, e.g. see reference here

Monday, 21 March 2011

XPath is to XML as regex is to text

Anyone who has been a developer for a while gets familiar with regular expressions. They eat text for breakfast, and spit out desired answers. For all their cryptic terseness, they are at least in part reusable, and are based on only a handful (or two...) of rules. 'regexes' are a domain-specific micro-language for searching and filtering text.

But once we get outside of text, what is there?

With XML, we have XPath. I had one of those light-bulb realisations recently that what regexes are to text, XPath is to XML. And it made me think:

Why would I want to use a data substrate which doesn't have such a tool?

What I mean is this: text has regex. XML has XPath. RDBMSs have SQL. Markup language of the day has... oh, it doesn't. Not really, in the sense of a standardised domain-specific micro-language. Regular expressions, XPath and SQL have history and mindshare. They 'work' specifically because they are DSLs, rather than high-level code. (OK, SQL is pushing it further than I'd like here, but it's still a DSL. Just not a micro-sized one.) To me, this is a problem which many 'NoSQL' tools have. I want their features, but CouchDB wants me to write map-reduce functions in JavaScript. MongoDB wants me to use a JSON-based query language. There is no commonality; no reuse; no lingua franca which will let me abstract the processing concepts away from the tools. Perhaps that will come in time for more data representations (this seems to be an attempt for JSON, for example), but there is a significant barrier before such a tool gains widespread acceptance as a common abstraction across an entire data layer.

Pipelines and Data Processing

The 'UNIX philosophy' of connecting together a number of single-purpose programs to accomplish larger tasks is one of the keys to its power. These tools can be plugged together in ways which the original creators may never have thought of. Tools such as sed and awk are often employed as regex-based filters in command pipelines. I wish more tools had XML output options, because the tools we use in our pipelines often output structured data in textual format, often in tabular form. Tables are great for human consumption (provided they are modest in size), but when we start getting empty columns, cells flowing onto multiple lines, and other inconsistencies, it becomes a pain to parse. How great things could be if every tool followed subversion's lead and had an --xml option:

svn diff -r $(svn log --stop-on-copy --xml | xpath -q -e '//log/logentry[last()]/@revision' | cut -d '"' -f 2):HEAD
(This command does a diff from the branch base to the most recent revision. It still does some basic text processing, because the end result of an XPath expression is still a text node.)

Just imagine if POSIX defined an XML schema for each relevant command, and mandated an --xml option. Life would be so much easier. In many environments, data is structured but we still represent it as text. The pipeline philosophy might be nice, but it isn't exploited to the full when we need to write convoluted awk scripts and inscrutable regular expressions (or worse, Perl ;) ) to try and untangle the structure from the text. Consider something straightforward like the output of 'mount' on a *nix box. On my Mac it looks like this:

ben$ mount
/dev/disk0s2 on / (hfs, local, journaled)
devfs on /dev (devfs, local, nobrowse)
map -hosts on /net (autofs, nosuid, automounted, nobrowse)
map auto_home on /home (autofs, automounted, nobrowse)
/dev/disk1s1 on /Volumes/My Book (msdos, local, nodev, nosuid, noowners)

This is structured data, but getting the information out of that text blob would not be trivial, and would probably take many minutes of trial and error with regexes to get something reasonable. And the crucial thing is that you couldn't be sure it would always work. Plug a new device in which gets mounted in some new and interesting way, and who is to say that the new output of mount won't suddenly break your hand-crafted regex? That's where XML shines. Adding new information doesn't change anything in the old information. The way to access it doesn't change. Nothing breaks in the face of extension. Compare this to something like CSV, where the insertion of an extra column means all the indices from that column onwards need to change in every producer and consumer of the data.
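To make the fragility concrete, here's a first-cut regex for the mount output above - a sketch, and exactly the kind of thing that quietly goes wrong:

import re

# Fragile by construction: assumes 'device on mountpoint (options)'.
MOUNT_RE = re.compile(r'^(?P<device>.+?) on (?P<mountpoint>.+) '
                      r'\((?P<options>[^)]*)\)$')

line = '/dev/disk1s1 on /Volumes/My Book (msdos, local, nodev, nosuid, noowners)'
match = MOUNT_RE.match(line)
if match:
    options = [opt.strip() for opt in match.group('options').split(',')]
    print(match.group('device'), match.group('mountpoint'), options)
# A device name containing ' on ' would be silently misparsed, and any
# line in a new format simply won't match at all.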

XML and the Web

I'm somewhat saddened that XHTML didn't win outright in the last decade, and that XML on the web never really took off. I spent months at a previous job trying to convince everyone that 'XML-over-HTTP' was the best thing since sliced bread. A single source of data, which could be consumed by man (via XSLT & CSS in the browser) and machine alike. Just think how much energy the likes of Google could save if our web content didn't focus almost entirely on human consumption and discriminate against machines ;-)

One interesting thing which has happened as XML-on-the-web has declined is the increase in use of CSS selectors, first via frameworks such as Sizzle (used in jQuery), and later in the standard querySelectorAll DOM method. There is clearly a need for these DSL micro-languages, and as the 'CSS selector' DSL shows, they can quickly establish themselves if there is a clear need and sufficient backing from the community. Also apparent is that existing solutions can be usurped - users could do virtually everything CSS selectors could do (and far more besides) with XPath, but that didn't happen. Simplicity won here. But just because XPath was (arguably) wrong for Web development, doesn't mean it is wrong everywhere, and I contend that there are places where we have over-simplified, forcing regular expressions and text manipulation to (and beyond) breaking point, when XML processing would make things simpler everywhere.

Conclusion

In terms of practicalities, if I had ever spent too long in the world of Java, I would probably see XML as an unwelcome and persistent pest. But living in the happier climes of Python-ville, I have access to the wonderful ElementTree API, via both ElementTree itself (included in the standard library) and lxml.

Both of these support XPath as well as high-level abstractions of XML documents to and from lists and dictionaries. With ElementTree, XML access from Python is (almost) as easy as JSON access from JavaScript. And with technologies like XPath and XSLT available, I think it's worth it.
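For a quick taste (the document here is invented - a minimal directory listing):

import xml.etree.ElementTree as ET

doc = ET.fromstring(
    '<directory name=".">'
    '<file mtime="1300000000.0"><name>a.txt</name><size>120</size></file>'
    '<file mtime="1300000001.0"><name>b.txt</name><size>4096</size></file>'
    '</directory>')

# ElementTree supports a useful subset of XPath out of the box:
for name in doc.findall('./file/name'):
    print(name.text)
# lxml goes further, offering full XPath 1.0 via its .xpath() method.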

As a final thought, I've just had a quick glance through Greg Wilson's excellent Data Crunching, which contains chapters on Text, Regular Expressions, Binary Data (rather a short ad-hoc chapter), XML, and Relational Databases. Perhaps the 'binary data' chapter is short because there simply aren't many patterns available. There is no language to describe unabstracted data. And perhaps when we consider the data layer we should be using, we should think not only of features and performance, but also of the power, expressiveness, and concision of the languages available to reason about the information. Perhaps too often we settle for a lowest-common-denominator solution (text) when a higher-level one might be more powerful, especially if we don't have to give up on the concepts of fine-grained interoperability which micro-DSLs such as XPath give us.

To be continued...