Mapping to one-dimensional values and then using a one-dimensional indexing method has been proposed as a way of indexing multi-dimensional data. Most previous related work uses the Z-Order Curve, but more recently the Hilbert Curve has been considered since it has superior clustering properties. Any approach, however, can only be of practical value if there are effective methods for executing range and partial match queries. This paper describes such a method for the Hilbert Curve.
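The Z-Order mapping mentioned above can be illustrated with a short sketch: interleaving the bits of the coordinates yields a one-dimensional Morton code that a conventional one-dimensional index can store. The function name and the fixed bit width are illustrative assumptions, not part of the paper.

```python
def interleave_bits(x: int, y: int, bits: int = 16) -> int:
    """Compute the Morton (Z-Order) code of a 2-D point by
    interleaving the bits of x and y: x occupies the even bit
    positions of the result, y the odd positions."""
    z = 0
    for i in range(bits):
        z |= ((x >> i) & 1) << (2 * i)      # bit i of x -> position 2i
        z |= ((y >> i) & 1) << (2 * i + 1)  # bit i of y -> position 2i+1
    return z
```

For example, the four cells of a 2x2 grid receive the codes 0..3 in the characteristic "Z" visiting order, which is why nearby points often (but not always) receive nearby one-dimensional values.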
The Hilbert Curve describes a method of mapping between one and $n$ dimensions. Such mappings are of interest in a number of application domains including image processing and, more recently, the indexing of multi-dimensional data. Relatively little work, however, has been devoted to techniques for mapping in more than 2 dimensions. This paper presents a technique for constructing state diagrams to facilitate mappings; it is a specialization of an incomplete generic process described by Bially. Although the storage requirements for state diagrams increase exponentially with the number of dimensions, they are useful in up to about 9 dimensions.
This report reproduces and briefly discusses an algorithm proposed by Butz for calculating a mapping between one-dimensional values and $n$-dimensional values regarded as the coordinates of points lying on Hilbert Curves. It suggests some practical improvements to the algorithm and presents an algorithm for calculating the inverse of the mapping, from $n$-dimensional values to one-dimensional values.
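The inverse mapping discussed above, from coordinates to a position along the curve, can be sketched for the 2-D case. This is the well-known bit-manipulation formulation of the 2-D mapping, not Butz's $n$-dimensional algorithm; the function name and the `order` parameter (number of bits per coordinate) are illustrative assumptions.

```python
def xy_to_hilbert(order: int, x: int, y: int) -> int:
    """Map a point (x, y) on a 2^order x 2^order grid to its
    distance along the Hilbert curve of that order.

    At each scale s, the quadrant bits (rx, ry) contribute
    s*s * ((3*rx) ^ ry) to the distance, and the remaining
    coordinates are reflected/rotated into the sub-square's frame.
    """
    d = 0
    s = 1 << (order - 1)
    while s > 0:
        rx = 1 if (x & s) > 0 else 0
        ry = 1 if (y & s) > 0 else 0
        d += s * s * ((3 * rx) ^ ry)
        if ry == 0:           # rotate the quadrant so recursion is uniform
            if rx == 1:
                x = s - 1 - x
                y = s - 1 - y
            x, y = y, x
        s //= 2
    return d
```

On a 2x2 grid (order 1) this visits (0,0), (0,1), (1,1), (1,0) in order, the basic "U" shape from which higher-order curves are built.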
This paper describes the use of Hydra to program a facility to find database values which are directly or indirectly associated with each one of a list of parameter values.
Current approaches to computation using visual data representations usually resort to a textual language for complex tasks, or concentrate on the simpler queries that can be easily expressed with present visual techniques.
We describe an implementation of Spider, an experimental visual database programming language. It uses a graph rewriting paradigm as a basis for a fully visual, computationally complete language. The graphs it rewrites represent the schema and instances of a functional data model. Application nodes are added to this graph and initiate transformations which delete and create primitives, including the application nodes.
Separate visual tools are included for defining the database instance and schema data, and transformations. A window displays the graph rewriting process, including the final results.
This paper discusses a new approach to Temporal Database modelling using the notion of a real-world event as primitive, the target implementation language being TFDL. We believe that our approach greatly improves the semantic content of the database.
This paper presents an approach to providing an interface to a functional database which allows novice users to query the database without the need to be familiar with a query language or the database schema. We use the set of types and functions in the database schema as the basis for generating a set of default queries. This set of default queries may then be modified by the database designer to select the most appropriate queries and to add further queries which were not automatically generated. The queries are organised in the form of a lattice such that similar queries are closely related. This lattice then forms the basis of a graphical interface to the database from which queries can be invoked and further refined.
The basic premise of this work is that an important feature of dynamic multi-dimensional file organisations should be a facility for tuning performance on the basis of information provided by database administrators. In this paper we suppose that such information is given as probability distributions for both the form of, and the specified values in, queries to be made to the system. For a class of grid file methods we show how the information supplied can be exploited. Of the existing optimisation methods in the literature, none are totally assumption-free with respect to these probability distributions; the method we give, however, is.
This dissertation investigates and discusses issues relating to the locking of triple stores in a multi-user environment.