Materials scientists use microscopy modalities to determine the structure,
order, and periodicity that affect properties. The core challenge is that only
a fraction of microscopy data is published. CRISPS will develop tools for
schema-free metadata exploration, and will allow microscopy images to be
searched and compared based on similarity of physics-aware features.
The goal of this interdisciplinary project is to develop methods, algorithms, and tools to support the retrieval of datasets by users who are not domain experts. For our purposes, users could be scientists, data journalists, or community members. The project will engage with users to determine their needs, develop novel methods for indexing and querying datasets, and augment datasets with additional information to help users who lack the appropriate vocabulary to effectively search for datasets.
This paper fully considers the structure of tables when creating neural representations of them by incorporating both vertical self-attention and a novel concept of horizontal self-attention. The resulting representations can be effectively used in table matching tasks and keyword-based retrieval tasks.
Zhiyu Chen, Haiyan Jia, Jeff Heflin and Brian D. Davison. Generating Schema Labels through Dataset Content Analysis. In Companion Proceedings of the Web Conference (WWW '18), pages 1515-1522. Presented at the International Workshop on Profiling and Searching Data on the Web (Profiles & Data:Search'18, co-located with The Web Conference), Lyon, France, April 2018. Best workshop paper award.
Many datasets have opaque attribute/column names, and some lack such names altogether. This paper presents an approach to automatically augment datasets with more informative schema labels that could later be used to match queries to tables or to determine similarity between tables. We identify a set of curated features, many of which consider the cell values in a column, and train a random forest model.
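To illustrate the kind of content-based column features such a model could use, here is a minimal sketch. The feature names and toy data are hypothetical, not the paper's actual feature set; a real system would feed many such features into a random forest classifier.

```python
def column_features(values):
    """Compute simple content-based features for one table column.

    These are illustrative examples of features derivable from cell
    values alone, without relying on the column's original header.
    """
    n = len(values)
    # Count cells that parse as numbers (allowing one decimal point).
    numeric = sum(1 for v in values if v.replace('.', '', 1).isdigit())
    return {
        "frac_numeric": numeric / n,             # share of numeric cells
        "avg_length": sum(len(v) for v in values) / n,
        "distinct_ratio": len(set(values)) / n,  # near 1.0 for key-like columns
    }

feats = column_features(["12.5", "7", "abc", "7"])
```

Features like these let a classifier distinguish, say, an ID column (high distinct ratio) from a category column, even when the header is missing.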
This paper describes how to improve the speed of determining mappings between objects described in RDF (although it can be easily applied to any graph data). The process requires no domain-specific information other than which classes and properties are comparable, which can be found in existing ontologies or by ontology-alignment techniques. We show that mappings between 1 million instances can be computed in under one hour on a Sun workstation. Surprisingly, this high-recall, low-precision filtering mechanism frequently leads to higher F-scores in the overall system.
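The spirit of a high-recall, low-precision filter can be shown with a toy blocking sketch: group instances by a cheap key so that only items sharing a key are compared in detail. The key function and data below are illustrative assumptions, not the paper's actual method.

```python
from collections import defaultdict

def candidate_pairs(records, key=lambda s: s.lower()[:3]):
    """Return candidate match pairs: records sharing a cheap blocking key.

    High recall (true matches usually share a key) but low precision
    (many candidates are not real matches); a slower matcher runs later.
    """
    blocks = defaultdict(list)
    for r in records:
        blocks[key(r)].append(r)
    pairs = set()
    for members in blocks.values():
        for i, a in enumerate(members):
            for b in members[i + 1:]:
                pairs.add((a, b))
    return pairs

pairs = candidate_pairs(["Albert Einstein", "albert einstein", "Niels Bohr"])
```

Because the expensive comparison only runs within blocks, the overall cost drops from quadratic in the dataset size to roughly quadratic in the block size.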
This paper describes a domain-independent approach to determining the data quality of graph data. The approach first learns probable functional dependencies in the graph, considering a fuzzy matching of values to account for some variation in the data. These functional dependencies are then used to test for data that does not fit the pattern. Experimental tests identified over 2800 anomalous triples in DBPedia, and investigation of a random sample found that 86.5% of these were actual errors.
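A minimal sketch of using a learned functional dependency to flag anomalies follows. The dependency and data are made up for illustration, and unlike the approach described above, this sketch uses exact rather than fuzzy value matching.

```python
from collections import defaultdict

def fd_violations(pairs):
    """Given (lhs, rhs) value pairs for a candidate dependency lhs -> rhs,
    return pairs whose rhs disagrees with the majority value for that lhs."""
    groups = defaultdict(list)
    for lhs, rhs in pairs:
        groups[lhs].append(rhs)
    bad = []
    for lhs, vals in groups.items():
        majority = max(set(vals), key=vals.count)
        bad += [(lhs, v) for v in vals if v != majority]
    return bad

# Toy data for a dependency like capital -> country: one triple disagrees.
violations = fd_violations([
    ("Paris", "France"),
    ("Paris", "France"),
    ("Paris", "Germany"),
])
```

Triples flagged this way are only candidate errors; as in the DBpedia experiment, a human check of a sample is needed to estimate how many are genuine.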
This paper describes an algorithm that uses the structure of a rule-goal tree expressing the rewrites of a given query to efficiently locate the relevant sources. It starts with the most selective query nodes, and incrementally loads sources, using the information to refine queries of subsequent sources. Our experiments show that this algorithm can answer many randomly-generated complex queries against 20 million heterogeneous data sources in less than 30 seconds.
This is the first paper to discuss our attempts to realize the vision of the Semantic Web as a Web-scale query-answering system. We loaded nearly 350,000 real-world semantic web documents that committed to 41,000 ontologies into our DLDB system and then used additional "mapping ontologies" to integrate them. This experiment yielded promising results: query times ranged from a few milliseconds to 5 seconds.
This is the definitive reference on the Lehigh University Benchmark (LUBM) and on empirical evaluation of Semantic Web knowledge base systems in general. This journal article coalesces the results from the ISWC 2003 and ISWC 2004 papers, the latter of which won the best paper award at the conference. In addition, it includes a discussion of preliminary tests on Jena and SPARQL versions of the benchmark queries.
The Semantic Web is a vision for extending the Web so that machines
can more intelligently integrate and process the wealth of information
that is available. Unlike HTML and ordinary XML, Semantic Web languages
such as SHOE,
DAML+OIL, and
OWL
(a W3C Recommendation),
allow semantics (i.e., meaning) to be explicitly associated
with the content. The semantics are formally specified in ontologies,
which can be shared via the Internet and extended for local needs.
The SWAT lab is at the forefront of Semantic Web research
by studying issues such as interoperability of distributed
ontologies, ontology evolution, and system architectures and tools
for the Semantic Web. See the group's homepage
for details.
Phi Beta Kappa, Alpha of Virginia Chapter (inducted 1992)
Information for Prospective Graduate Students:
Please do not send me e-mail asking me to evaluate your chances of
admission to the department. I typically do not respond to such requests.
If you are interested in joining my research group, then send me an
e-mail that specifically describes what you would like to do and what prior
qualifications you have. However, I recommend that you read some of my
publications and explore our
current research first. If I
think your interests match our research, then I will contact you for
further information.
Semantic Web Resources:
The Semantic Web by Tim Berners-Lee, James Hendler, and Ora Lassila
The Scientific American article that presents the vision of the Semantic Web.
State of the LOD Cloud
Statistics about the Linked Open Data cloud provided by Freie Universität Berlin. Linked Open Data is real data in Semantic Web form and is growing daily. These statistics are typically updated once a year.
Semantic Web Case Studies and Use Cases
A continually growing list of applications of Semantic Web technology collected by the W3C. Case studies are actually deployed systems, while use cases are prototype systems.
SemanticWeb.org
A Semantic Wiki for the Semantic Web community. Includes information on tools, ontologies, people, and events.
Semantic Web Activity at W3C
The World Wide Web Consortium's collection of specifications, working groups, and resources related to the Semantic Web.
SemWebCentral
A web site for non-developers to learn about the Semantic Web and for developers to share Semantic Web tools.