Stefan van den Berg
The increasing popularity of VGI datasets motivates research into the quality of this data type relative to professional geographic data. This research assesses the quality of two professional datasets (Google Maps and Locatus) and one VGI dataset (OpenStreetMap) in the city of Utrecht, The Netherlands. Overall, the Locatus dataset is of the highest quality. The other two datasets have a comparable intrinsic quality but differ considerably in pragmatic quality, which is moderate for OpenStreetMap and poor for Google Maps. The extrinsic quality analysis of OpenStreetMap indicates high trustworthiness of this dataset in the study area, because almost all contributions there come from experienced mappers.
Martijn van der Putten
My thesis investigated the quality of bus route data in OpenStreetMap compared to data delivered by transport operators. Real-time vehicle locations were used to validate both datasets. It was concluded that the data delivered by transport operators was slightly better than the public transport data in OpenStreetMap. Given that the OpenStreetMap data is created by volunteers and therefore inevitably lags behind reality, its quality was quite good.
Ingmar de Beukelaar
The goal of this thesis was to investigate the new opportunities and challenges that Vector Tiles offer for web cartographers. Several existing Vector Tile tools and technological solutions for implementing Vector Tiles in web mapping were inventoried. Afterwards, two different workflows were assessed in terms of cartographic strengths and weaknesses. The aim was to investigate the cartographic potential of Vector Tile technology and solutions. The challenges of the thesis were to give an overview of the current state of Vector Tile technology and to fill the knowledge gap between computer sciences and cartography by combining practical research on the emerging Vector Tile technology with cartographic theory.
Davey den Haan
The full potential of open data has yet to be unlocked, mainly because users face barriers when using it. While research has focused on finding solutions, the barriers are still present. Some researchers have suggested that intermediaries are key to solving these barriers for users, but there is a lack of research on their role. Therefore, this research analyzed different types of intermediaries and their effect on removing or reducing the barriers found in open data use. The result is an overview of different types of intermediaries and the barriers on which they have an effect.
This qualitative exploratory thesis presents a comparison of the emotional experience of tourists in touristic and non-touristic areas, based on physiological observations through skin conductance measurements. The ‘new urban tourism’ theory implies that tourists’ interests are shifting toward non-touristic areas, and that visitors become more enthused by walking in an undiscovered and surprisingly novel urban environment outside the touristic space. By gathering data on the skin conductance level (SCL) of tourists and visualizing it in GIS, potentially arousing spots within the city can be identified, comparing the touristic Old-Town route with the Outside the Old-Town route located outside the touristic area. The resulting outcomes match the statements of the ‘new urban tourism’ theory.
GIS is especially suitable for location decision making. However, this thesis has revealed that companies within the retail sector often do not make use of GIS when searching for a new establishment. From the interviews it emerged that the work processes of the retail companies can be divided into three groups: ‘Use GIS’, ‘Non-use GIS’ and ‘External GIS’. It appeared that, when awareness of GIS within a company was high, the company was already using GIS. The five most important factors for the use or non-use of GIS appeared to be: lack of knowledge, the processing of subjective data, company strategy, an alternative method or system, and the cost-benefit ratio. It also emerged that there is indeed a gap between the present contribution of GI to a company and its potential value. The most important conclusion of this research is that insight into the possibilities of GIS is key to enhancing its potential use. Therefore, awareness needs to increase, because awareness is the first step towards greater insight. Furthermore, subjective data remains important to every company and must always be taken into account when discussing GIS analyses.
Ineke van Oostenbruggen
Georeferencing old aerial photographs is time-consuming and thus costly. In order to unlock the potential value of historical aerial imagery, a method was proposed to extract intersections from the images, which can function as ground control points in Li & Briggs’ (2012) topological Point Pattern Matching algorithm. Unfortunately, the use of neural networks proved unsuccessful, as the dataset under research was too diverse.
Twitter data has diverse uses within GIS research, but only a fraction of the tweets posted by users is actually geotagged. This problem can be tackled by using geolocation inference methods (GIMs), in which tweet metadata is used to infer the location of a tweet or user indirectly. The aim of the thesis was to find out whether the usability of Twitter data can be increased using these methods. It was found that each GIM has different pros and cons depending on the research scenario it is used in, and that an increase in usability is a matter of compromise rather than an overall improvement in data usability.
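One common family of GIMs infers a user's location from the free-text "location" field in tweet metadata via a gazetteer lookup. The sketch below is a minimal illustration of that idea only; the gazetteer, field names and majority-vote rule are assumptions for the example, not the methods evaluated in the thesis.

```python
# Minimal sketch of one simple geolocation inference method (GIM):
# resolve the free-text user-location field of tweets against a toy
# gazetteer and take a majority vote. All names/values are illustrative.
from collections import Counter

GAZETTEER = {  # toy gazetteer: place name -> (lat, lon)
    "utrecht": (52.0907, 5.1214),
    "amsterdam": (52.3676, 4.9041),
}

def infer_user_location(tweets):
    """Majority vote over gazetteer matches in the tweets' user-location fields."""
    hits = []
    for tweet in tweets:
        loc = tweet.get("user_location", "").strip().lower()
        if loc in GAZETTEER:
            hits.append(loc)
    if not hits:
        return None  # no inference possible for this user
    place, _count = Counter(hits).most_common(1)[0]
    return GAZETTEER[place]

tweets = [{"user_location": "Utrecht"}, {"user_location": "Utrecht, NL"},
          {"user_location": "utrecht"}]
print(infer_user_location(tweets))  # -> (52.0907, 5.1214)
```

The trade-off the thesis describes is visible even here: a coarse gazetteer recovers locations for more users (higher usability) but at lower spatial precision and with misses for unmatched strings.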
Ruben van der Valk
This thesis researched indoor localization methods to measure the occupancy of train passengers per compartment in real time. A prototype that utilizes cameras and Wi-Fi was created using Python. The localization methods were tested in an old train in a railway museum. The camera-based localization has an average false negative error of 4-9% and an average false positive error of 1-9%, and therefore seems useful for measuring occupancy in the train. The Wi-Fi based localization has a lower accuracy and needs more testing in a real train environment.
To reach the European climate goals, a large number of onshore wind turbines is required. I created a multi-agent model that is able to simulate the interactions between the involved actors. In this way, the most suitable locations can be determined.
With the coming of the Environment and Planning Act (Omgevingswet), improved public participation is required. Nevertheless, pilot studies indicate that current participatory processes are insufficient. ICT-based tools, such as GIS-based applications and Web 2.0 technology, can potentially contribute to the improvement of public participation. User-friendliness and smart visualization techniques are important factors for such ICT tools to be successful. Complex information can be made more accessible to laymen when it is visualized properly via, for example, a map-based Web 2.0 application that supports 3D visualization. However, integrating tacit knowledge with expert knowledge remains a complicated effort that requires more study.
MSc topic: Understanding abstract geo-information workflows and converting them to executable workflows using Semantic Web technologies
Workflows are used to store the methods for solving spatial problems. These workflows are often very complicated and catered to a very specific execution environment, which strongly hinders their reuse. Previous research has suggested creating abstract workflows, which are independent of the execution engine. However, this requires the user to have expert knowledge to convert the workflow back into an executable one. Semantic Web technologies allow machines to assist in this process by storing information in a way that is both human- and machine-readable. This research has shown that this is indeed possible: the Semantic Web can help users understand abstract workflows and convert them into executable ones in the execution environment selected by the user. This allows workflows to be shared in a software-independent and standardized way, strongly improving their reuse.
The geoportals currently used to communicate land use plans do not meet the needs of the intended users. According to previous studies, there should be more emphasis on the use of alternative user interfaces and a better presentation of the information. The added value of this research is that an actual usability study was performed on the system in use. The outcomes are:
- There is a clear difference in perceived usability of the website between users with and without a dedicated geographical education.
- There is a clear difference in performance between the two respondent groups.
- There is a strong correlation between opinion and performance within the two respondent groups.
This thesis investigated whether the privacy paradox applies to geotagging behaviour on social media. The participants displayed neutrality towards privacy threats; therefore, there is no privacy paradox between users’ geotagging behaviour and their attitude towards location privacy. However, users are not fully aware of the purposes for which their data is used, or of which third parties have access to their personal data. Social media companies are not transparent about the monetization of personal data. To resolve this informational asymmetry between users and social media companies, privacy awareness among users should be stimulated via guidelines and informative tutorials.
The aim of this investigation was to research the use of Semantic Web technologies to increase collaboration within the Circular Economy (CE). As knowledge about products and companies is needed in the CE context, material passports have been proposed to store this information. However, most of these proposals remain static and do not address how to create interconnections between information. This investigation proposes converting material passports into Linked Data representations. To facilitate this, two ontologies, CEO and CAMO, were created that connect Circular Economy actors based on their activities, their location and their material in- and outputs (referents). Spatial parameters determine the feasibility of an exchange. This approach was tested with SPARQL/GeoSPARQL queries for three use case scenarios, namely fashion, buildings and food. Results indicate that actors and products can be connected, and meaningful answers can be obtained for circular collaboration patterns.
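The matching logic behind such queries — connect actors whose material outputs feed another actor's inputs, subject to a spatial feasibility constraint — can be sketched in plain Python. This is only an illustration of the idea: the actor data, the 50 km threshold and the field names below are invented for the example and are not taken from the CEO or CAMO ontologies, which the thesis queries with SPARQL/GeoSPARQL instead.

```python
# Illustrative sketch: pair Circular Economy actors whose material
# outputs match another actor's inputs, within a maximum exchange
# distance. All data and the threshold are hypothetical examples.
from math import radians, sin, cos, asin, sqrt

def haversine_km(a, b):
    """Great-circle distance in km between two (lat, lon) points."""
    lat1, lon1, lat2, lon2 = map(radians, (*a, *b))
    h = sin((lat2 - lat1) / 2) ** 2 + cos(lat1) * cos(lat2) * sin((lon2 - lon1) / 2) ** 2
    return 2 * 6371 * asin(sqrt(h))

def circular_matches(actors, max_km=50):
    """Triples (producer, consumer, material) where an output can feed a nearby input."""
    pairs = []
    for p in actors:
        for c in actors:
            if p is c:
                continue
            for material in sorted(set(p["outputs"]) & set(c["inputs"])):
                if haversine_km(p["location"], c["location"]) <= max_km:
                    pairs.append((p["name"], c["name"], material))
    return pairs

actors = [
    {"name": "TextileMill", "location": (52.09, 5.12), "outputs": ["cotton waste"], "inputs": []},
    {"name": "FashionReuser", "location": (52.37, 4.90), "outputs": [], "inputs": ["cotton waste"]},
]
print(circular_matches(actors))  # -> [('TextileMill', 'FashionReuser', 'cotton waste')]
```

In the thesis this filtering is expressed declaratively over Linked Data rather than imperatively; the sketch only shows why the spatial parameter is what makes an exchange feasible or not.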
After a natural disaster, decision makers in the humanitarian sector often deal with a scarcity of information on the spatial aspects of the event’s impact. Priority Index Models that learn from data of past events can help to rapidly identify aid priority areas. Three different types of statistical learning models were fitted to data of the 2015 Nepal earthquake. The most favourable model applies a random forest algorithm to predict the number of completely damaged houses per unit; it achieved an R-squared score of 0.60 on an independent test set.
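The model type named above can be sketched as follows. Note that the features and data below are synthetic placeholders; the actual Nepal 2015 predictors and the reported R-squared of 0.60 are not reproduced here.

```python
# Hedged sketch of the favoured model type: a random forest regressor
# predicting completely damaged houses per spatial unit, evaluated by
# R-squared on a held-out test set. Data and features are synthetic.
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.metrics import r2_score
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n = 500
# hypothetical unit-level predictors: shaking intensity, building count, slope
X = rng.uniform(0, 1, size=(n, 3))
# synthetic target: damage grows with intensity * exposure, plus noise
y = 200 * X[:, 0] * X[:, 1] + 10 * X[:, 2] + rng.normal(0, 5, n)

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=42)
model = RandomForestRegressor(n_estimators=200, random_state=42).fit(X_train, y_train)
r2 = r2_score(y_test, model.predict(X_test))  # held-out R-squared score
print(round(r2, 2))
```

Evaluating on an independent test set, as the thesis does, is what makes the R-squared score a fair estimate of how the model would rank priority areas for an unseen event.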
Lars van Hoeve
Due to recent technological advancements, large volumes of movement data are being collected about almost anything that moves. Professional football organizations also have access to these growing data volumes. However, the literature shows that the capacity to collect data has not been matched by the ability to process it in meaningful ways. The question therefore is how to transform these data into useful information on which coaches, analysts and players dare to base their decisions and communicate the results found. This research contributes to this development by proposing ways to make movement data more accessible to football clubs, developing a conceptual visual interface to visually explore and analyze concerted movements.
The main objective of my thesis was to study the impact of various factors on fCover. First, the effect of spatial resolution on the fCover estimates was examined by using different devices, such as smartphones and UAVs, providing various levels of resolution. Next, the influence of the method used for fCover estimation was studied by comparing two types of approaches: regression and classification. Lastly, the impact of field sampling was analyzed by relating the fCover obtained from varying numbers of samples to the fCover of the field from which these samples were collected.
A downscaling model of MODIS LAI images to the Landsat spatial resolution was developed using regression analysis between MODIS LAI and EVI data for five hydrologically diverse study areas. The MODIS EVI values in the estimated regression equations were replaced with Landsat EVI values, and the LAI values were re-calculated at the higher resolution. It was found that a high percentage of the LAI variability could be explained by linear regression equations with EVI, indicating that the regression equations estimated per date can be used to predict LAI at the higher resolution with a high degree of confidence.
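The downscaling step described above can be sketched numerically: fit a per-date linear regression LAI = a·EVI + b at the coarse MODIS resolution, then apply the fitted equation to the finer Landsat EVI values. All values below are synthetic illustrations, not actual MODIS or Landsat data.

```python
# Minimal numpy sketch of the downscaling idea: fit LAI = a*EVI + b at
# the coarse MODIS resolution, then substitute Landsat EVI values into
# the fitted equation to re-estimate LAI at the higher resolution.
# All values are synthetic, for illustration only.
import numpy as np

rng = np.random.default_rng(1)
modis_evi = rng.uniform(0.1, 0.6, 100)                         # coarse-resolution EVI
modis_lai = 8.0 * modis_evi - 0.3 + rng.normal(0, 0.05, 100)   # coarse-resolution LAI

# per-date linear regression between MODIS LAI and MODIS EVI
a, b = np.polyfit(modis_evi, modis_lai, 1)

landsat_evi = np.array([0.15, 0.30, 0.55])   # fine-resolution EVI pixels
landsat_lai = a * landsat_evi + b            # downscaled LAI estimates
print(np.round(landsat_lai, 2))
```

The validity of this substitution rests on the finding above: only because EVI explains a high share of the LAI variability can the coarse-scale relation be transferred to the finer pixels with confidence.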