This is a continuation of the previous article. For a better understanding, it is recommended to look through the earlier blog posts.
System architecture elaboration
Data consolidation tools require the spreadsheets (in the case of relational data) and the files whose information needs to be consolidated to meet certain conditions. First, each of the large worksheets has to cover the same information range along both of its axes, which is important for an editor. This requirement helps the data consolidation tool (in most cases a ready-made software solution) perform the complicated calculations that determine how each data cell corresponds to the information in the other worksheets (if we are talking about tables) or pages (in the case of files). Once the process is finished, the software creates a separate, independent worksheet that summarizes all the information obtained from the respective source worksheets.
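To make the idea concrete, here is a minimal JavaScript sketch of consolidating several equally sized worksheets (represented as plain 2-D arrays) into a single summary sheet by summing corresponding cells. A real consolidation tool would read the data from spreadsheet files; the values below are purely illustrative.

```javascript
// Minimal sketch: consolidate several equally sized worksheets into one
// summary sheet by summing the corresponding cells.
function consolidate(worksheets) {
  const [first, ...rest] = worksheets;
  // Start the summary as a copy of the first sheet.
  const summary = first.map(row => row.slice());
  for (const sheet of rest) {
    sheet.forEach((row, r) => {
      row.forEach((value, c) => {
        summary[r][c] += value; // cell-wise correspondence between sheets
      });
    });
  }
  return summary;
}

// Example: measurements from two sources covering the same range.
const sheetA = [[10, 12], [8, 9]];
const sheetB = [[4, 5], [6, 7]];
console.log(consolidate([sheetA, sheetB])); // [[14, 17], [14, 16]]
```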
In the general case, data consolidation is applied in a wide range of fields to organize certain processes more efficiently. Moreover, as a strong optimization process, data consolidation can bring considerable improvements to tasks that require large and non-uniform datasets. As mentioned above, data consolidation relies on two approaches: Extract, Transform, Load (ETL) and Extract, Load, Transform (ELT). To provide sufficiently efficient data consolidation, the storages used in the process, such as databases, data stores, and the associated software applications, must allow consolidation of data obtained from multiple applications. The key issue is to make the extracted data readily available and usable, with the ability to provide detailed reporting and analysis.
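The ETL flow itself can be illustrated with a short sketch; the source records, field names, and the in-memory "warehouse" below are made up for illustration and are not the project's actual code.

```javascript
// Illustrative ETL flow: extract raw records from two hypothetical sources,
// transform them into a common schema, and load them into a single target
// store ready for reporting and analysis.
const sourceA = [{ station: 'ALM-01', t_celsius: 21.4 }];
const sourceB = [{ id: 'AST-07', tempF: 64.4 }];

// Transform: bring both formats to one schema (station id, temperature in °C).
const transform = records => records.map(r => ({
  station: r.station ?? r.id,
  tempC: r.t_celsius ?? (r.tempF - 32) * 5 / 9,
}));

// Load: in this sketch the "warehouse" is just an in-memory array.
const warehouse = [];
warehouse.push(...transform(sourceA), ...transform(sourceB));
console.log(warehouse);

// In an ELT variant the raw records would be loaded first and
// transformed later, inside the target storage itself.
```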
Essential environmental parameters, such as the Earth's surface, atmosphere, and subsurface, can be examined and verified with a high level of precision by feeding satellite data into such a system. GIS technology gives researchers and developers the opportunity to explore and investigate variations in Earth processes over certain periods of time, such as days, months, and years. For instance, the changes in vegetation vigor through a growing season can be visualized and animated to determine when drought was most extensive in a particular region. The resulting graphic represents a rough measure of plant health. Working with two variables over time would then allow researchers and developers to identify different patterns, such as regional differences in the lag between a decline in rainfall and its effect on vegetation. As we can see, systems of this category may be useful in analytical work.
Moreover, such technology and the availability of digital data on regional and global scales enable such analyses. Vegetation data of this kind is produced, for instance, by the Advanced Very High Resolution Radiometer (AVHRR). This sensor system measures the total amount of energy reflected from the Earth's surface across various bands of the spectrum for measurement sites of about 1 square kilometer. The sensor, mounted on the satellite, produces images of a particular location on Earth a certain number of times per day (typically twice).
The chosen scheme of data consolidation at the database layer is shown below.
This scheme may be implemented with the help of ready-made commercial solutions as well as open-source ones and, consequently, is relatively fast and easy to build. This is the main reason why this solution is the most suitable for creating a prototype.
Modern GIS technologies use digital data, which can be created by various methods (for example, gathering data from open data repositories). One of the most common methods of data creation is digitization, where a hard-copy map or survey plan is transferred into a digital medium through the use of special software, such as a CAD program with geo-referencing capabilities. With the wide availability of ortho-rectified imagery (from satellites, aircraft, Helikites, and UAVs), a technique called heads-up digitizing is becoming the main way in which geographic data is extracted.
Heads-up digitizing involves tracing geographic spatial data directly on top of aerial imagery. In contrast, the traditional method, so-called heads-down digitizing, involves tracing the geographic form on a separate digitizing tablet. GIS uses spatio-temporal location, in which both space and time are considered, as the key index variable for all other data.
Just as a relational database containing textual or numeric information can relate many different tables using common key index variables, a GIS can relate otherwise unrelated data by using geolocation as the key index variable. The key in this case is the location of an object combined with its extent in space and time.
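A minimal sketch of this idea: two otherwise unrelated datasets are related only through their coordinates, which are snapped to a coarse grid cell that plays the role of the key index variable. All names and coordinates below are made up for illustration.

```javascript
// Relate two unrelated datasets through location alone. Coordinates are
// snapped to a coarse grid cell, and the cell id acts as the "key index
// variable", much like a foreign key in a relational database.
const cellKey = (lat, lon, size = 0.5) =>
  `${Math.floor(lat / size)}:${Math.floor(lon / size)}`;

const stations = [{ name: 'Station-12', lat: 43.24, lon: 76.91 }];
const solarSites = [{ site: 'SolarPark-3', lat: 43.30, lon: 76.75 }];

// Index one dataset by its location key...
const byCell = new Map(stations.map(s => [cellKey(s.lat, s.lon), s]));

// ...and join the other dataset against it.
const joined = solarSites.map(p => ({
  ...p,
  nearbyStation: byCell.get(cellKey(p.lat, p.lon))?.name ?? null,
}));
console.log(joined); // [{ site: 'SolarPark-3', ..., nearbyStation: 'Station-12' }]
```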
At present, the software part of the project is a single web application providing the following features (a minimal client-side sketch of both follows the list):
Working with layers
Getting information from weather stations
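The sketch below shows, purely for illustration, how both features can look on the client side with Leaflet. The /api/stations route stands in for whatever endpoint the Play application actually exposes; the Leaflet calls themselves are standard.

```html
<!-- Minimal, illustrative client-side sketch of both features. -->
<link rel="stylesheet" href="https://unpkg.com/leaflet/dist/leaflet.css" />
<div id="map" style="height: 400px"></div>
<script src="https://unpkg.com/leaflet/dist/leaflet.js"></script>
<script>
  // Feature 1: working with layers — one base layer plus a switchable overlay.
  const map = L.map('map').setView([48.0, 67.0], 5);
  const osm = L.tileLayer('https://tile.openstreetmap.org/{z}/{x}/{y}.png', {
    attribution: '&copy; OpenStreetMap contributors'
  }).addTo(map);
  const stationsLayer = L.layerGroup().addTo(map);
  L.control.layers({ OpenStreetMap: osm }, { 'Weather stations': stationsLayer }).addTo(map);

  // Feature 2: getting information from weather stations via the server side.
  // The endpoint and field names are assumptions, not the project's actual API.
  fetch('/api/stations')
    .then(response => response.json())
    .then(stations => stations.forEach(s =>
      L.marker([s.lat, s.lon])
        .bindPopup(`${s.name}: ${s.temperature} °C`)
        .addTo(stationsLayer)));
</script>
```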
The web application is built on the traditional Model-View-Controller (MVC) architecture on the basis of PlayFramework. The general working principle is presented in the diagram below.
Data analysis and visualization tools are to be implemented with the help of interpreted programming languages: Groovy for data analysis and JavaScript in combination with Python for data visualization.
The choice of Groovy is motivated by its low entry barrier, native support for Java-based libraries, and easy integration into the Java environment of PlayFramework. Moreover, the language possesses all the powerful features of Java but with a simplified syntax.
JavaScript is considered a standard language in the field of visualization, especially of spatial data. At present there are a great many libraries written in JavaScript that provide powerful tools for spatial data visualization, such as Leaflet.js, D3.js, Polymaps.js, etc.
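As an example of the second library in the list, here is a minimal D3.js sketch that renders polygons from a GeoJSON file as SVG paths. The file name regions.geojson and the page elements are assumptions, not part of the project.

```javascript
// Minimal D3.js sketch: draw GeoJSON polygons as SVG paths.
// Assumes d3 (v5.8+ for selection.join) is loaded on the page
// and that an <svg id="chart"></svg> element exists.
d3.json('regions.geojson').then(geojson => {
  const width = 800, height = 500;
  const projection = d3.geoMercator().fitSize([width, height], geojson);
  const path = d3.geoPath(projection);

  d3.select('#chart')
    .attr('width', width)
    .attr('height', height)
    .selectAll('path')
    .data(geojson.features)
    .join('path')
    .attr('d', path)
    .attr('fill', '#cde')
    .attr('stroke', '#333');
});
```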
The overall application prototype architecture is shown in the diagram below.
As can be seen from the diagram, the application is a typical MVC (model-view-controller) representative. Functional features common to geographic information systems are provided by such open-source providers as OpenStreetMap (other providers may also be used). Data visualization is provided by the Leaflet.js library. The plugin developed during the project uses the Leaflet library to import maps created in QGIS into the web portal.
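Without going into the plugin's internals (its actual interface may differ), one common way a QGIS-authored layer can reach the web map is to export it as GeoJSON and add it with Leaflet. The file name and styling attributes below are illustrative, and `map` refers to the Leaflet map from the earlier sketch.

```javascript
// Illustrative import of a QGIS-exported layer into the Leaflet map.
fetch('qgis-export/land-use.geojson')
  .then(response => response.json())
  .then(geojson => {
    L.geoJSON(geojson, {
      // Reuse styling attributes written into the export, if present.
      style: feature => ({
        color: feature.properties.stroke || '#2c7fb8',
        fillColor: feature.properties.fill || '#a6bddb',
        weight: 1
      }),
      onEachFeature: (feature, layer) =>
        layer.bindPopup(feature.properties.name || 'Unnamed feature')
    }).addTo(map);
  });
```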
The chapter outlines the common visualization problems from the perspective of spatially distributed data. The architecture of the whole system has been elaborated and presented above in the chapter.
The data visualization may be implemented by means of the existing multifunctional GIS that were overviewed and briefly described in the chapter. Besides using existing GIS for map creation, it is also efficient to use program libraries, including open-source ones. Ready-made solution sets are commonly available as web applications. These applications use the server side for storing and processing user data. This data may later be used to create different types of visualizations with the help of JavaScript libraries and preprocessing on the server side. At present there is no unified classification of tasks concerning geographically/spatially distributed data visualization. Still, it is possible to distinguish the most typical task classes, for instance classification, clustering, or heat maps. These classes should not be treated as rigid terms and allow a wide interpretation. In practice, when solving concrete problems, the most appropriate and practically useful visualization methods and tools are chosen.
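As an example of the heat-map task class, the widely used Leaflet.heat plugin can be applied; the sample points and weights below are made up, the plugin is assumed to be loaded alongside Leaflet, and `map` again refers to the earlier sketch.

```javascript
// Heat-map sketch with the Leaflet.heat plugin.
// Each point is [lat, lon, weight]; the values are illustrative only.
const samplePoints = [
  [43.25, 76.90, 0.8],
  [43.30, 76.75, 0.6],
  [43.10, 77.05, 0.9]
];
L.heatLayer(samplePoints, { radius: 25, blur: 15 }).addTo(map);

// For the classification and clustering task classes, plugins such as
// Leaflet.markercluster offer ready-made groupings in a similarly compact style.
```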
At present there are several program libraries that allow creating one's own visualization tools in different formats. In most cases, these libraries are client-side and written in JavaScript. The choice of this language is explained by its wide distribution as well as by the availability of well-documented and popular libraries written in it. It is also widely supported by the programmer community and by large software companies. The powerful tools of Cascading Style Sheets (CSS) make it possible to build flexible tools for constructing different types of visualizations.
All the approaches described above are fully or partially implemented in the websites dedicated to renewable energy monitoring that were briefly described earlier. The provided overview allows certain conclusions to be drawn about the functional features of the websites as well as about the methods of data gathering and representation. It has also been mentioned above that there are several scientific schools conducting research on renewable energy usage and monitoring. The largest ones are situated in the USA and Europe, particularly in Denmark. There are also several schools in the Russian Federation. A certain amount of research and practical work has been conducted in Kazakhstan as well.
A system prototype has been developed that should provide the following features: choice of a technological platform for energy production and evaluation of its economic efficiency; risk evaluation; evaluation of the ecological consequences of a transition to renewable energy; evaluation of opportunities for a transition to intelligent distribution grids, etc. The key GIS layers are: geoinformation; energy sources; energy consumers; energy distribution and transmission system; resources; energy production technologies; energy storage technologies; ecological situation and potential threats; economic evaluation; legal evaluation; data protection.
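One possible, purely illustrative way to reflect some of these key layers in the web application is to expose them as switchable Leaflet overlays; the layer contents would come from the corresponding data sources, and `map` and `osm` refer to the earlier sketch.

```javascript
// Illustrative grouping of a few key GIS layers as switchable overlays.
const overlays = {
  'Energy sources': L.layerGroup(),
  'Energy consumers': L.layerGroup(),
  'Distribution and transmission grid': L.layerGroup(),
  'Ecological situation': L.layerGroup()
};
L.control.layers({ OpenStreetMap: osm }, overlays).addTo(map);
```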
The project consists of two parts: hardware and software.
The software part, which is a web application, is a pure MVC (model-view-controller) representative. The functional features are provided by existing map services.
To be continued.