Rants, raves (and occasionally considered opinions) on phyloinformatics, taxonomy, and biodiversity informatics. For more ranty and less considered opinions, see my Twitter feed. ISSN 2051-8188
June 10, 2014
I've been involved in a few Twitter exchanges about the upcoming pro-iBiosphere meeting regarding the "Open Biodiversity Knowledge Management System (OBKMS)", which is the topic of the meeting. Because for the life of me I can't find an explanation of what "Open Biodiversity Knowledge Management System" is, other than vague generalities and appeals to the magic pixie dust that is "Linked Open Data" and "RDF", I've been grumbling away on Twitter.
So, here's my take on what needs to be done. Fundamentally, if we are going to link biodiversity information together we need to build a network. What we have (for the most part) at the moment is a bunch of nodes (which you can think of as data providers such as natural history collections, databases, etc., or different kinds of data, such as names, publications, sequences, specimens, etc.).
We'd like a network, so that we can link information together, perhaps to discover new knowledge, to serve as a pathway for analyses that combine different sorts of data, and so on:
A network has nodes and links. Without the links there's no network. The fundamental problem as I see it is that we have nodes that have clear stakeholders (e.g., individual museums, herbaria, publishers, database owners, etc.). They often build links, but these are typically incomplete (they don't link to everything that is relevant) and transitory (there's no mechanism to ensure the links persist). There is no stakeholder for whom the links themselves are the most important thing. So, we have this:
This sucks. I think we need an entity, a project, an organisation, whatever you want to call it, for whom the network is everything. In other words, they see the world like this:
If this is how you view the world, then your aim is to build that network. You live or die based on the performance of that network. You make sure the links exist, are discoverable, and persist. You don't have the same interests as the nodes, but clearly you need to provide value to them because they are the endpoints of your links. But you also have users who don't need the nodes per se, they need the network.
If you buy this, then you need to think about how to grow the network. Are there network effects that you can leverage, in the same way CrossRef has with publishers submitting lists of literature cited linked to DOIs, or in social media where you give access to your list of contacts to build your social graph?
If the network is the goal, you don't just think "let's stick HTTP URLs on everything and it will all be good". You can think like that if you are a node, because if the links die you can still persist (you'll still have people visiting your own web site). But if you are a network and the links die, you are in big trouble. So you develop ways to make the network robust. This is one reason why CrossRef uses an identifier based on indirection: it makes it easier to ensure the network persists in the face of changes in how the nodes serve their data. What is often missed is that this also frees up the nodes, because they don't need to commit to serving a given URL in perpetuity; indirection shields them from this.
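A minimal sketch of why indirection helps. The identifiers and URLs below are invented for illustration: the network stores stable identifiers, and a resolver maps each one to whatever URL the node currently uses, so a node can reorganise its website without breaking a single link.

```python
# A resolver maps stable identifiers to a node's current URL.
# All identifiers and URLs here are made up for illustration.
resolver = {
    "id:specimen/123": "http://old-museum.example.org/spec/123",
}

def resolve(identifier):
    """Return the current location for a stable identifier."""
    return resolver[identifier]

# The node reorganises its website; only the resolver entry changes.
# Every link in the network (which stores the identifier) still works.
resolver["id:specimen/123"] = "http://new-museum.example.org/objects/123"

print(resolve("id:specimen/123"))
```

This is, in miniature, what the DOI handle system does: citations carry the identifier, not the URL, and only the resolver needs updating when a node moves.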
In order to serve users of the network, you want to ensure you can satisfy their needs rapidly. This leads to things like caching links and basic data about the end points of those links (think how Google caches the contents of web pages so if the site is offline you may still find what you are looking for).
If your business depends on the network, then you need to think how you can create incentives for nodes to join. For example, what services can you offer them that make you invaluable to the nodes? Once you crack that, then all sorts of things can happen. Take structured markup as an example. Google is driving this on the web using schema.org. If you want to be properly indexed by Google, and have Google display your content in a rich form (e.g., thumbnails, review ratings, location, etc.) you need to mark up your page in a way Google understands. Given that some businesses live or die based on their Google ranking, there's a strong incentive for web sites to adopt this markup. There's a strong incentive for Google to encourage markup so that it can provide informative results for its users (otherwise they might rely on "social search" via Facebook and mobile apps). This is the kind of thing you want the network to aim for.
In summary, this is my take on where we are at in biodiversity informatics. The challenge is that the organisations in the room discussing this are typically all nodes, and I'd argue that by definition they aren't in a position to solve the problem. You need to pivot (ghastly word) and think about it from the perspective of the network. Imagine you were to form a company whose mission was to build that network. How would you do it, how would you convince the nodes to engage, what value would you offer them, what value would you offer users of the network? If we start thinking along those lines, then I think we can make progress.
June 7, 2014
I'm adding more charts to the GBIF Chart tool, including some to explore the type status of specimens from the Solomon Islands. There are nearly 500 holotypes from this region, so quite a few new species have been discovered there.
Inspired by the Benoît Fontaine et al. paper on the lag time between a species being discovered and subsequently described (see Species wait 21 years to be described - show me the data) I thought I would do a quick and dirty plot of the difference between the year a specimen was collected and the year the name of the taxon it belongs to was published (from the authorship string for the scientific name). Plotting the results was *cough* interesting:
In theory, the difference between the two dates should be negative (if you subtract publication year from collection year): the smaller the number, the shorter the wait for description. But I found some large positive numbers, implying that taxa had been described long before the types were collected! Something is clearly wrong. What seems to be happening is that GBIF has failed to match the species name for an occurrence, and so goes up the taxonomic hierarchy and records just the genus. For example, http://gbif.org/occurrence/472764211 was collected in 1965 and is the type of Pandanus guadalcanalius St.John. GBIF doesn't recognise this name, and so matches the occurrence to the genus Pandanus Linnaeus, 1782. Hence it looks like we've used a time machine to describe a taxon in 1782 based on a specimen collected in 1965.
At the other end of the spectrum, there are a lot of specimens that seem to have waited over 200 years for description! It turns out these are mostly specimens from the MCZ that have their collection date recorded by GBIF as "1700-01-01". This seems an arbitrary date, and it turns out to be an artefact. The MCZ records "unknown" collection dates as the range 1700-01-01 to 2100-01-01 (see http://mczbase.mcz.harvard.edu/guid/MCZ:IZ:DIPL-4985). Unfortunately, when it generates the export for GBIF, these get truncated to 1700-01-01, and GBIF then (not unreasonably) treats that as the actual collection date. Somewhere in the middle of the plot of lag between collection and description is some interesting information, but it's a pity that most of it is obscured by some serious data errors.
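The lag computation and the two failure modes described above can be sketched as follows. The records are invented for illustration; in real GBIF data the values would come from the occurrence's year field and the year parsed from the name's authorship string.

```python
# Compute the lag between collection and description, and flag the two
# data problems discussed above. Records are invented for illustration.
records = [
    {"collected": 1965, "described": 1782},  # name matched only to genus
    {"collected": 1700, "described": 1935},  # placeholder "unknown" date
    {"collected": 1933, "described": 1935},  # a plausible record
]

results = []
for r in records:
    lag = r["collected"] - r["described"]  # negative = described after collection
    if r["collected"] == 1700:
        flag = "suspect: placeholder collection date"
    elif lag > 0:
        flag = "suspect: described before collection"
    else:
        flag = "ok"
    results.append((lag, flag))

print(results)
```

A filter like this would at least let the interesting middle of the distribution be plotted without the time-machine artefacts.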
For me the bigger lesson here is the power of visualisation to explore the data and to expose errors. This is why I was underwhelmed by the new charts GBIF is releasing. Plots of ever upward trends are ultimately not very useful. They don't give much insight into the data, nor do they help tackle interesting questions. I think we need a much richer set of visualisations to really understand the strengths and limitations of the data in GBIF.
Investigating further, there are some other reasons for the "back to the future" types. For example, http://www.gbif.org/occurrence/188826624 (CAS 5506 from FishBase) was collected in 1933 and is recorded as a holotype, with the scientific name Cypselurus opisthopus (Bleeker, 1865). 1933 - 1865 = 68, so the taxon was named 68 years before it was collected(!).
A bit of investigation using BioNames, BioStor, and GBIF (http://www.gbif.org/occurrence/473244692, another record for CAS 5506) reveals that CAS 5506 is the holotype for Cypselurus crockeri, shown below in a plate from its original description (published in 1935):
Seale A (1935) The Templeton Crocker Expedition to western Polynesian and Melanesian islands, 1933. No. 27. Fishes. Proceedings of the California Academy of Sciences 21: 337–378. http://biostor.org/reference/59326
So, in fact this species was described shortly after its collection, with a lag of 1933 - 1935 = -2 years.
Apart from the duplication issue (FishBase has replicated some of the CAS dataset, sigh), the other problem is one of modelling the data. The CAS record has the original taxon name for which CAS 5506 is the type (Cypselurus crockeri), the FishBase record has the currently accepted name for the taxon (Cypselurus opisthopus). These two different approaches have very different implications for the charts I'm making, and simply reinforce my feeling that the GBIF data is both fascinating and full of "gotchas!".
June 6, 2014
Note to self on citation matching.
Looking for this paper "Fishes of the Marshall and Marianas islands. Vol. I. Families from Asymmetrontidae through Siganidae" I Googled it, adding "biostor" as a search term to see if I'd already added it to BioStor. The Google search:
found several hits in BioStor:
What is interesting is that these hits are to the full text of references that cite the article I'm after, not the article itself. I'm sure many have had this experience, where you are searching for an obscure article and you keep finding papers that cite it, rather than the actual paper you're after. But this suggests another strategy for building the citation graph for an article. If you have a decent corpus of full-text articles, search for the article (using, say, title, journal, and pagination) in the text of those articles and store the hits. Those are the references that cite the article (OK, not all of them, but some). This may be a more attractive way of building the citation graph than parsing citations in articles and trying to locate them. Indeed, it could be extended to help mark up those citations. Imagine grabbing blocks of text from near the end of an article, searching for those in a database of citations, and using close matches to flag the corresponding block as a citation.
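A minimal sketch of this idea, with an invented two-document corpus: slide a window over each article's text and keep the documents containing a close fuzzy match to the target citation (difflib, from the standard library, does the approximate matching).

```python
import difflib

# Sketch of citation-graph building by searching full text for an article:
# scan each article's text for an approximate match to the target citation.
# Corpus and citation are invented for illustration.
target = "Fishes of the Marshall and Marianas islands"

corpus = {
    "article-1": "...as reported in Fishes of the Marshall and Marianas islands, Vol. I...",
    "article-2": "...an unrelated paper on Pacific corals...",
}

def cites(text, citation, threshold=0.9):
    """True if text contains a close approximate match to the citation."""
    words = text.split()
    n = len(citation.split())
    for i in range(len(words) - n + 1):
        window = " ".join(words[i:i + n])
        if difflib.SequenceMatcher(None, window.lower(), citation.lower()).ratio() >= threshold:
            return True
    return False

citing = [doc for doc, text in corpus.items() if cites(text, target)]
print(citing)
```

In practice you would use a proper full-text index (Lucene, Elasticsearch, or similar) rather than a quadratic scan, but the logic is the same: the hits are candidate citing articles.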
Need to think about this a little more...
Update, June 8, 2014
The paper is:
Polepeddi, L., Agrawal, A., & Choudhary, A. (2011). Poll: A Citation Text Based System for Identifying High-Impact Contributions of an Article. 2011 IEEE 11th International Conference on Data Mining Workshops. IEEE. doi:10.1109/icdmw.2011.136
Following on from the previous post on visualising GBIF data, I've added some more interactivity. If you click on a pane in the treemap widget you get a list of the corresponding taxa, together with an image from EOL (if one exists). It's a fun way to quickly see what sort of species are present (in this case in the Solomon Islands). You can try it at http://bionames.org/~rpage/gbif-stats/.
It's not obvious from the site, but to go back up the taxonomic hierarchy in the treemap, right click (ctrl-click on a Mac) on the grey bar corresponding to the higher taxon.
June 4, 2014
Tim Robertson and the team at GBIF are working on some nice visualisations of GBIF data, and have made an early release available for viewing: http://analytics.gbif-uat.org. For a given country, say, the Solomon Islands, you can see numerous plots, mostly like this:
Ever the critic, as much as I like this (and appreciate the scale of the task underlying doing analytics on data at the scale of GBIF), what I would really like to see is something that more closely resembles Google Analytics. I want graphs that I can use to get some insight into the data, which lead me to ask questions, and which make it easy for me to discover the answers.
So, I put together a crude, live demo of the sort of thing I'd like to see. You can see it at http://bionames.org/~rpage/gbif-stats (can't promise that this link will be long-lived), and below is a screen shot:
What I've done is fetch all the occurrence records for the Solomon Islands from GBIF (using the API), dumped that into CouchDB, and generated some simple queries. I display the results using Google Charts. There are some similarities with the tools developed by Javier Otegui, Arturo Ariño, and colleagues.
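The harvesting step can be sketched as below. The endpoint and the country/limit/offset parameters follow GBIF's public v1 occurrence search API (with "SB" as the ISO code for the Solomon Islands), but check the current API documentation before relying on this; the `fetch` hook is only there so the paging logic can be exercised without a network connection.

```python
import json
from urllib.request import urlopen
from urllib.parse import urlencode

# Sketch of harvesting occurrences from the GBIF API, paging with
# limit/offset until "endOfRecords" is true.
API = "https://api.gbif.org/v1/occurrence/search"

def fetch_page(offset, limit=300, fetch=None):
    """Fetch one page of results; `fetch` can be swapped out for testing."""
    url = API + "?" + urlencode({"country": "SB", "limit": limit, "offset": offset})
    if fetch is None:
        fetch = lambda u: json.load(urlopen(u))
    return fetch(url)

def harvest(fetch=None):
    """Yield every occurrence record, one page at a time."""
    offset = 0
    while True:
        page = fetch_page(offset, fetch=fetch)
        yield from page["results"]
        if page["endOfRecords"]:
            break
        offset += len(page["results"])
```

Each yielded record can then be pushed straight into CouchDB (or any document store) for querying.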
Otegui, J., Ariño, A. H., Encinas, M. A., & Pando, F. (2013). Assessing the Primary Data Hosted by the Spanish Node of the Global Biodiversity Information Facility (GBIF). PLoS ONE. doi:10.1371/journal.pone.0055144
Otegui, J., & Ariño, A. H. (2012). BIDDSAT: visualizing the content of biodiversity data publishers in the Global Biodiversity Information Facility network. Bioinformatics. doi:10.1093/bioinformatics/bts359
For fun I've also added a map of the GBIF occurrences (also served from CouchDB).
Here's a quick guide to some of the charts. Below you can see (left) a plot of species accumulation over time, that is, the total number of species that have been collected up to that point. If we had collected all the species we'd expect this curve to asymptote (flatten out). If it keeps going up, then we still have more sampling to do. On the right is the number of occurrences recorded for each year. You can see that collecting is highly episodic.
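The species accumulation curve is straightforward to compute from the occurrence records: for each year, count the distinct species seen up to and including that year. The data below are invented for illustration.

```python
# Species accumulation: for each year, the cumulative number of distinct
# species collected up to and including that year. Data invented.
occurrences = [
    (1965, "Pandanus guadalcanalius"),
    (1965, "Cypselurus crockeri"),
    (1970, "Cypselurus crockeri"),   # already seen, adds nothing
    (1975, "Papilio ulysses"),
]

seen = set()
curve = {}
for year, species in sorted(occurrences):
    seen.add(species)
    curve[year] = len(seen)   # later records in a year overwrite earlier ones

print(sorted(curve.items()))
```

A flat stretch (like 1965 to 1970 here) is a period of collecting that turned up nothing new; a curve that is still climbing means the fauna and flora are undersampled.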
To get a little more information on this, I've generated a crude chart where the rows are institutions (e.g., museums and herbaria) that have specimens, and the number of occurrences collected each decade is represented by shaded boxes (the rightmost box is the current decade; hover over a bar to see a popup with the decade). To the right is the total number of occurrences.
From this we can see that there have been some major collections at various times (e.g., Kew in the 1960s, the Australian Museum in the 1970s to 1990s). Strangely, the MCZ has lots of specimens from the 1700s, so I suspect we have a data quality issue here. There are certainly some issues with dates in this data set, with about a quarter of occurrences having no date:
Note that the data for the Solomon Islands comes from all around the world, mostly from the US. There is a big spike in the date-of-collection curve in 1944, suggesting a lot of material may be the result of collecting by US servicemen in WW2.
I use a treemap to display the taxonomic distribution of the records, and a donut chart to summarise the taxonomic level to which the occurrences are identified:
The treemap is dominated by vertebrates, which I suspect is a poor reflection of the actual taxonomic composition of the Solomon Islands biota. Over 3/4 of the occurrences are identified to species level, which is encouraging, but there's clearly a lot of material that needs some taxonomic work.
This has been made in a rush, and there is a lot which could be done. For example, some of the charts would be more useful if you could drill down and explore further. This could be done via the GBIF API or portal (for example, by constructing a URL that shows the portal results for the Solomon Islands for a given year of collection).
There are, of course, issues of scalability. I've made this for the 83,364 occurrences currently in the GBIF portal for the Solomon Islands. There would need to be some thought given to how this could be scaled to larger data sets. But I think this is worth pursuing so that we can get further insights into the remarkable database that GBIF is building.
June 2, 2014
It is almost a year to the day that I released BioNames, a database of "taxa, texts, and trees". This project was my entry in EOL's Computable Data Challenge. Since it went live (after much late night programming by myself and Ryan Schenk) I've been tweaking the interface, cleaning (so much cleaning), and adding data (mostly DOIs, links to BioStor, and PDFs). I also wrote a paper describing the project, published in PeerJ (http://dx.doi.org/10.7717/peerj.190).
I'm building BioNames to scratch a very specific itch. To me it is a source of enormous frustration that one of the most basic questions we can ask about a name (where was it first published?) is difficult to answer using current taxonomic databases. And if there is an answer, it is usually given as a text string describing the publication (i.e., a literature citation) rather than an identifier such as a DOI that enables me to (a) go to the publication, (b) refer to the publication in a database in an unambiguous way, and (c) discover further information about that publication by querying services that recognise that identifier.
There are enormous digitisation efforts underway by commercial publishers, digital archives, and libraries, and all of this is putting more and more literature online. This is the primary evidence base for taxonomy: it is where new names are published, taxa are described, and hypotheses of synonymy and relationship are proposed, and we should be actively linking to it. Of course, there are some projects that do this, but these are typically restricted in taxonomic or geographic scope. I want all this information together in one place. Hence, BioNames.
Of course, I could wait until projects like ZooBank have all the animal names, but as I pointed out in Why the ICZN is in trouble, the ICZN and ZooBank have only a tiny fraction of the published names:
This renders ZooBank barely usable for my purposes. There are millions of animal names in circulation, and our inability to discover much about them leads to all sorts of headaches, such as the errors in GBIF that I've mentioned earlier on this blog. I want a tool that can help me interpret those errors, and I want it now, hence BioNames.
What is in BioNames?
The original data comes from the LSID metadata served by ION. At the moment BioNames has 4,880,925 names, 1,549,152 of which are linked to a bibliographic citation. The bulk of the time I spend on BioNames consists of cleaning and clustering these citations, and linking them to digital identifiers.
To get some insight into what is left to be done I created a CSV dump of the publication data underlying BioNames, and loaded it into Google's Cloud Storage (http://storage.googleapis.com/ion-names/names3.csv). I then used Google's BigQuery to write some simple SQL queries. You can find more details here: https://github.com/rdmpage/bionames-bigquery.
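The same kind of counts can be run locally without BigQuery. The sketch below tallies identifier coverage from a CSV dump; the column names ("doi", "biostor", "pdf") and the three rows are invented, since the real dump's schema may differ.

```python
import csv
import io

# Tally how many names have each kind of identifier, and how many have
# at least one. The sample data and column names are invented.
sample = io.StringIO(
    "name,doi,biostor,pdf\n"
    "Aus bus,10.1234/x,,\n"
    "Cus dus,,12345,http://example.org/a.pdf\n"
    "Eus fus,,,\n"
)

counts = {"doi": 0, "biostor": 0, "pdf": 0, "any": 0}
for row in csv.DictReader(sample):
    hit = False
    for col in ("doi", "biostor", "pdf"):
        if row[col]:           # non-empty cell = identifier present
            counts[col] += 1
            hit = True
    if hit:
        counts["any"] += 1

print(counts)
```

The "any" count is the interesting one, since an article can carry several identifiers at once (a DOI and a BioStor link, say) and must not be counted twice.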
Here is a summary table of the number of names that are published in an article with one of the identifiers that I track. These include DOIs, PMIDs, as well as whether the article is in BioStor, has a URL (typically to a publisher's web site), or a PDF.
Identifier   Number of names
DOI                  196,915
BioStor              130,792
JSTOR                 23,483
CiNii                 11,296
PMID                   8,886
URL                   72,754
PDF                  161,474
(any)                489,029
The final row is the number of names whose publication has at least one identifier (some articles have multiple identifiers, such as a DOI and a link to BioStor). Given that there are approximately 1.5 million names with bibliographic citations, and around 490,000 of these have an identifier, a user has roughly a 30% chance of finding the original description for an animal name picked at random. Obviously, BioNames has gaps (ION has missed a number of names and/or publications), the taxonomic coverage of bibliographic identifiers is uneven (depending on the publications taxonomists choose to publish in, and the level of digitisation of those publications), and there is still a lot of data cleaning to do. But an almost 1 in 3 chance of finding something useful for a name seems a reasonable level of progress.
Out of interest I created some quick and dirty charts in Excel for different categories of identifier. Here, for example, is the percentage of names published each year that are linked to a publication with a DOI:
Over 80% of names published in 2013 were in an article with a DOI, so we are fast heading to a situation where modern zoological taxonomy is fully part of the citation graph of science. Much of this spike in 2013 is due to the adoption of DOIs by Zootaxa, which is far and away the dominant journal in animal taxonomy.
Here is the same chart for publications in BioStor.
The big spike at the start is for names where the year of publication is missing. Leaving that aside, we can see the impact of the 1923 copyright cut-off in the US, which puts a big dent in the Biodiversity Heritage Library's digitisation efforts. Note, however, that BHL has a lot of post-1923 content.
Does anyone use BioNames?
I use BioNames almost every day, and have devoted way more time than is healthy to populating it. As I explore issues like the quality of the taxonomy in GBIF, I find it useful to see the original descriptions of taxa, and their fate in subsequent revisions. In the early days I'd spend more time adding missing papers to help answer a question, but increasingly I'm finding that the content is already there. So, I find it useful, but what (gulp) if I'm the only one?
Below is the number of "sessions" per day since BioNames was launched (data from Google Analytics for May 1st, 2013 to May 31st, 2014). After an initial flurry of interest, web traffic pretty quickly died off. Since then it's been slowly gaining more visitors, then (for reasons which escape me) it started getting a lot more traffic from April onwards:
To give these numbers some context, for the same period BioStor (my archive of articles from BHL) had the following traffic:
Note the different scales, BioStor is getting around 500 sessions a day during week days, BioNames gets around 200. By way of comparison, GBIF gets up to 4000 sessions a day, and this blog typically has 50-100 sessions per day.
There are a couple of directions for the future. There is still a lot of data cleaning and linking to do. Last year I did a quick analysis of which taxonomic journals should be digitised next. I've updated this by creating a spreadsheet that ranks the journals in BioNames by the number of names each has published, with each journal coloured by the fraction of those names for which I've found a digital identifier for the paper in which they are published. This table is incomplete, and reflects not only the extent of digitisation, but also the extent to which I've managed to locate the journals online. But it is a starting point for thinking about which journals to prioritise for digitisation, or, if they are already digitised, which journals I need to target for addition to BioNames. The spreadsheet is available as a Google sheet.
Another direction is data mining. In addition to the obvious task of locating and indexing taxonomic names, there are other things to be done. In BioStor I extract geographic point localities and specimen codes from the OCR text. These could be indexed to enable geographic or specimen-based searching. The same approach could be generalised to the literature in BioNames, so that we could track the mentions of a particular specimen, or retrieve lists of publications about a specific locality (e.g., all taxonomic papers that refer to a particular mountain range, deep sea vent, or island).
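This kind of extraction can be sketched with simple regular expressions. The patterns below are crude illustrations (an institution acronym followed by a catalogue number, and degree-minute coordinates with a hemisphere letter), not the ones BioStor actually uses, and the sample sentence is invented.

```python
import re

# Sketch of mining OCR text for specimen codes and point localities.
# The patterns are simplistic illustrations, not BioStor's real ones.
text = ("Holotype MCZ 12345, collected at 9°26'S, 159°57'E "
        "on Guadalcanal; paratype AMNH 67890.")

# institution acronym followed by a catalogue number
specimens = re.findall(r"\b[A-Z]{2,5}\s+\d{3,6}\b", text)

# degrees°minutes' followed by a hemisphere letter
localities = re.findall(r"\d{1,3}°\d{1,2}'[NSEW]", text)

print(specimens)
print(localities)
```

Real OCR text is far messier than this (broken codes, degree symbols misread as digits), so in practice each pattern needs a battery of variants plus some validation against known institution codes.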
BioNames also does some limited analysis of taxonomic name co-occurrence, for example suggesting that species names with the same specific epithet but different generic names are possible synonyms if they occur on the same page. There is a lot of scope for expanding this. I'm also keen to explore citation indexing, that is, extracting lists of literature cited from articles in BioNames, and linking those to the corresponding record in BioNames. Ultimately I want to be able to navigate through the taxonomic literature along these citation links, so that we can trace the fate of names through time.
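The co-occurrence heuristic is simple enough to sketch directly: flag any pair of names sharing a specific epithet but not a genus, found on the same page. The name mentions below are invented for illustration.

```python
from itertools import combinations

# Flag possible synonyms: same specific epithet, different genus,
# mentioned on the same page. Data invented for illustration.
mentions = [
    ("Cypselurus", "crockeri", "page-1"),
    ("Exocoetus", "crockeri", "page-1"),
    ("Papilio", "ulysses", "page-2"),
]

candidates = []
for (g1, e1, p1), (g2, e2, p2) in combinations(mentions, 2):
    if e1 == e2 and g1 != g2 and p1 == p2:
        candidates.append((f"{g1} {e1}", f"{g2} {e2}"))

print(candidates)
```

The flagged pairs are only hypotheses, of course: shared epithets are common (think "australis" or "major"), so the same-page constraint is what keeps the false positive rate tolerable.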
But this is still only a start, papers such as Seltmann et al. illustrate other things that are possible once we have a large corpus of taxonomic literature available:
Seltmann, K. C., Pénzes, Z., Yoder, M. J., Bertone, M. A., & Deans, A. R. (2013, February 18). Utilizing Descriptive Statements from the Biodiversity Heritage Library to Expand the Hymenoptera Anatomy Ontology. (C. S. Moreau, Ed.)PLoS ONE. Public Library of Science (PLoS). doi:10.1371/journal.pone.0055674
So, a lot still to be done. I hope to have achieved some of this if and when I write a follow up post on the status of BioNames in a year's time.
May 15, 2014
.@catapanoth Agreed, it’s just that I think there’s a much bigger win to be had by exploiting existing services instead of going it alone…
— Roderic Page (@rdmpage) May 15, 2014

I had a long Twitter conversation with Terry Catapano (@catapanoth) today, and as can happen with a distracted stream of tweets, I think we were a little at cross purposes. This blog post is an attempt to unpack the debate.
What prompted the conversation was the following paper:
Emery, C., et al. (1899). Formiche di Madagascar raccolte dal Sig. A. Mocquerys nei pressi della Baia di Antongil (1897-1898). Bullettino della Società Entomologica Italiana 31: 263-290. doi:10.5281/zenodo.9785

It's not the paper so much as the fact that it is stored on the Zenodo repository (which I was only looking at because of the announcement that GitHub now supports DOIs through Zenodo). Given that the PDF for Emery's paper was uploaded by the Plazi project, I wondered what the intention was of assigning a Zenodo DOI to this paper, rather than one from CrossRef.
Not all DOIs are equal
As Geoffrey Bilder notes in his post "DOIs unambiguously and persistently identify published, trustworthy, citable online scholarly literature. Right?":
...some have adopted a cargo-cult practice of seeing the mere presence of a DOI on a publication as a putative sign of “citability” or “authority.” There is a danger that we fall into the trap of thinking that all we need to do is slap a DOI on a paper and all the good things we associate with DOIs will magically happen. This isn't the case. Not all DOIs are the same. Zenodo DOIs are provided by DataCite, and DataCite DOIs don't have all the features that CrossRef provides for its DOIs.
CrossRef provides some key services, one of the most important being discoverability. Given a bibliographic reference, CrossRef has tools that can find whether it has a DOI (e.g., http://search.crossref.org). I use this a lot to map taxonomic papers to DOIs (by a lot I mean searching for DOIs for tens of thousands of articles). Most people don't do this, but you benefit from this service every time you read an article and see the literature cited section decorated with DOIs. Publishers use CrossRef's tools to convert citations from dumb strings to useful links. This feature, which we have come to expect from any modern article, relies on CrossRef having definitive metadata for millions of articles, all of which have DOIs. When publishers submit article metadata while registering their DOIs, they usually include lists of literature cited (and their DOIs). This means that CrossRef is building a citation database, which you can see if you visit the web page for an article and see a "cited by" link.
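Citation-to-DOI matching of this kind can be sketched against CrossRef's public REST API: a bibliographic query goes to api.crossref.org/works, and the top hit's DOI comes back in the response. The endpoint and "query.bibliographic" parameter are part of the documented CrossRef API, but the response below is a canned, abbreviated example (with the reserved example prefix 10.5555), not real output.

```python
from urllib.parse import urlencode

# Sketch of citation-to-DOI matching via the CrossRef REST API.
def crossref_search_url(citation):
    """Build a works query for a free-text bibliographic citation."""
    return "https://api.crossref.org/works?" + urlencode(
        {"query.bibliographic": citation, "rows": 1})

def best_doi(response):
    """Pull the top hit's DOI out of a CrossRef works response."""
    items = response["message"]["items"]
    return items[0]["DOI"] if items else None

url = crossref_search_url("Seale 1935 Templeton Crocker Expedition fishes")
canned = {"message": {"items": [{"DOI": "10.5555/example"}]}}

print(url)
print(best_doi(canned))
```

In practice you would also check the match score CrossRef returns before accepting the top hit, since a free-text query can return a plausible-looking but wrong article.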
Then there are additional services. Given that CrossRef has high quality bibliographic metadata for articles, if you have a DOI there is no need to type in the details of a paper. Most bibliographic software such as Mendeley and Zotero can take a DOI and flesh out those details for you. If a DOI fails to resolve, you can contact CrossRef Support and have somebody investigate. Then there are the new services such as FundRef and Prospect, which provide information on who funded a paper, and what text and data mining rights are available for a paper.
Why use DOIs?
The rationale for using DOIs for articles is so that they can be unambiguously identified, which in turn means we can build a robust citation network. But this requires infrastructure, and that is what CrossRef provides through tools like citation to DOI matching. Other DOI registration agencies don't do this, and CrossRef isn't aware of other DOIs, so putting, say, a DataCite DOI (such as those used by Zenodo) on an article doesn't achieve the primary goal of a DOI (embedding it in the citation graph of academic literature).
Hence, I regard putting a Zenodo DOI on an article as basically a wasted opportunity. If we aren't making the primary biodiversity literature discoverable, and hence linkable, then all we are doing is keeping that literature in a ghetto (and reinforcing the impression that this literature, and taxonomy itself, really doesn't matter). It is striking that if you read a recent paper that describes a new species, the bulk of the systematic or ecological literature has DOIs, but the bulk of the taxonomic literature does not. If it doesn't have a CrossRef DOI, it's effectively invisible. All academic literature should get first-class DOIs. Whether it's "legacy" or not is irrelevant: the Royal Society of London has DOIs on articles going back to 1800, and these are now as accessible as any paper published today.
Eyes on the prize
So, if we are going to bring the taxonomic literature into the mainstream, make it discoverable and citable, then we should focus on bringing that literature into CrossRef's infrastructure. Archives like JSTOR do it, the Biodiversity Heritage Library (BHL) does it for some of its content (and they should be doing it at article level, right now).
One response to this is to say "but doesn't this cost money?" Of course it does. Everything does, nothing is free. What frustrates me most about this is that it's the wrong question. The first question should not be "how much does this cost?". If it is, you've already lost sight of the goal. Instead, we should be asking, "What do we want? What do we need to be able to do to progress our field?". Once we articulate that, then we figure out how to pay for it. And we figure that out because we've decided this is what we need.
I think we want discoverable, citable taxonomic literature, embedded in the rest of the scientific literature and the publishing process. We don't get that by simply buying the cheapest DOIs available and slapping them on articles. To do so is to fundamentally misunderstand why DOIs matter, and to ignore the role that infrastructure plays in their success in academic publishing.
May 6, 2014
As announced on phylobabble I've started to revisit visualising large phylogenies, building on some work I did a couple of years ago (my how time flies). This time, there is actual code (see https://github.com/rdmpage/deep-tree) as well as a live demo http://iphylo.org/~rpage/deep-tree/demo/.
You can see the amphibian tree below at http://iphylo.org/~rpage/deep-tree/demo/show.php?id=5369171e32b7a:
You can upload or paste a tree (for now in NEXUS format), or paste in a URL to a NEXUS file (e.g., from TreeBASE). I'll add support for more formats when I get the chance. The viewer uses the same approach as Google Maps, breaking the image of the tree into "tiles" of a fixed size, so even if the tree image is huge, the web browser only ever displays the same number of tiles. You can zoom in to see individual taxa, or zoom out for an overview. One reason I'm building this is to display DNA barcoding trees alongside the million DNA barcode map.
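The tile arithmetic behind this is simple. The sketch below (tile size and viewport numbers are illustrative, not taken from deep-tree's code) shows why the browser's workload is bounded: only the tiles that intersect the current viewport are requested, however large the full tree image is.

```python
# Tile arithmetic in the Google Maps style: the tree is drawn once per
# zoom level, cut into fixed-size tiles, and the viewer requests only
# the tiles that intersect the viewport. Numbers are illustrative.
TILE = 256  # tile width/height in pixels

def visible_tiles(x, y, width, height):
    """Tile (column, row) indices covering a viewport at pixel (x, y)."""
    first_col, last_col = x // TILE, (x + width - 1) // TILE
    first_row, last_row = y // TILE, (y + height - 1) // TILE
    return [(c, r)
            for r in range(first_row, last_row + 1)
            for c in range(first_col, last_col + 1)]

# An 800x600 viewport touches a handful of tiles wherever it sits,
# no matter how many pixels the whole tree image occupies.
tiles = visible_tiles(1000, 2000, 800, 600)
print(len(tiles))
```

Scrolling just shifts the (x, y) origin, so only the newly exposed edge tiles need to be fetched, which is what makes panning around a huge tree feel instant.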
As ever this is a crude first attempt, but feel free to try it and let me know how you get on.