The web is a complicated beast - a mess of interlocking and competing standards layered over unstructured data. And while there is hope that someday it will become at least somewhat more structured so computers can pull useful information out of it, right now it's hard to work with. I've been working with the web since it became popular in the early 90s, and in that time it has done nothing but become more complicated.
The problem with this complexity is that it has become increasingly difficult for academics, or anyone new to the Web, to quickly jump into studying its aspects. For instance, in information retrieval (the father of the now-popular "search"), most scientists were used to working within the bounds of written text. Therefore, most early web research was incredibly text-centric. Unfortunately, the web is not text, and every day that becomes more and more the case. Even figuring out how to apply text analysis to a less structured layout language like HTML is not an easily solved problem.
Luckily, the growth of open source as a programming paradigm has led to the creation of many free tools that make this a much more plausible task. HTML parsers, classification tools, IO libraries, and so on have been the tools of a web researcher's trade over the past years. However, each of these tools has pros and cons, and oftentimes you just choose the one that works first. As an academic, I can tell you that I have spent a scarily large number of hours during my graduate studies just evaluating tools.
So after you find tools, you then have to figure out how to get them to work. For some tools this is very easy, but for others there is a serious investment of time necessary just to get them working, let alone integrated with whatever project you're attempting. Documentation is often scarce or completely absent, and you're forced to do a lot of trial and error to see if things do what you expect them to.
So finally you have something working and you perform whatever task you need to do. Maybe you write a paper on it; maybe you just share the results with your group/department/class. However, you almost always gloss over the real core of what is being done to prove your point. Each of the libraries you chose works in sometimes scarily different ways to produce the eventual results that you obtain. So why should anyone believe your results? But they do, because it's "close enough" and they really didn't want to look at the nitty-gritty details anyway.
Finally, now that you are done with the project, and especially if this is something academic, you throw it away. Or if you're good, you archive it somewhere so that you, or someone who works on something similar, can pretend to look at it, quickly decide it's not what they need, and build something else from scratch.
This is what I see happening a lot in web analysis projects, and it makes me sad. webseer was built to try to solve some of the problems I have been wrestling with over the last seven years of my graduate research. People who are studying the Web should spend most of their time figuring out cool new things that drive advancements, not figuring out tools or questioning their own underlying methodology.
So at a high level, webseer is glue and standardization for web tools. It's not for everyone. I understand that some people hate getting extra baggage with their tools, and while I try very hard to hide the stuff you aren't interested in, if you just need a specific HTML parser, you should just use that parser and not mess with webseer. But if you are doing something more complex, or find a certain web library confusing, webseer may come in handy.