The arrival of computers in companies was a blessing, but computerization 1.0 created many isolated systems (silos). As a result, data remained compartmentalized and locked inside the various business areas. The arrival of databases changed the corporate climate, inaugurating a style of data management capable of illuminating organizations with information. Databases are handy for managing data and for simplifying the search for specific information.
Flat File Database (1960)
Before the advent of the relational model, in the 1960s, the mainframe world used the hierarchical or network data model. In a relational database, data is divided into tables by topic, and each table is divided into fields (categories). This division makes the DB considerably more efficient than an archive built, for example, from flat files in an operating system's file system.
Navigational DB (1970)
Some claim that the network data model of the 1970s is similar to the current graph model, but this is not the case: in the 1970s there was no declarative query language for the network model. At the time, developers had to know how the data was physically stored and write programs to access the information.
Relational Database (1980)
The relational model was first proposed in 1970 by Edgar F. Codd, an IBM researcher, in his paper "A Relational Model of Data for Large Shared Data Banks", but it only became established in the 1980s. It reached a level of data independence that allowed users to access information without knowing the physical structure. SQL (Structured Query Language), also born in IBM's labs, became established as the standard query language of relational databases. Relational DBMSs are centered on the typically transactional model (On-Line Transaction Processing, OLTP).
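The relational/OLTP idea described above can be sketched with Python's built-in `sqlite3` module; the table and column names here are illustrative, not taken from any particular system.

```python
# Minimal sketch of a relational table and a declarative SQL query,
# using Python's standard-library sqlite3 module.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE customers (id INTEGER PRIMARY KEY, name TEXT, city TEXT)")
conn.executemany(
    "INSERT INTO customers (name, city) VALUES (?, ?)",
    [("Ada", "London"), ("Edgar", "San Jose")],
)
conn.commit()

# Declarative query: we state *what* we want, not how it is stored on disk.
rows = conn.execute("SELECT name FROM customers WHERE city = ?", ("London",)).fetchall()
print(rows)  # [('Ada',)]
```

This is the data independence mentioned above: the query names tables and fields, never physical storage.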
Distributed Database (1990)
With the explosion of the web economy, the W3C, an international organization that develops standards for the Web, promoted proposals for web data standards. In this decade, multidimensional databases became established to compensate for the poor performance of relational databases in analytical processing (On-Line Analytical Processing, OLAP). Multidimensional engines allow more effective analyses of vast amounts of data.
In fact, by the end of the 1990s, every commercial relational database contained a multidimensional engine. This chapter of development also ushered in object-oriented database programming. The late '90s is also the era of NoSQL (later read as "Not Only SQL"). The term, coined by Carlo Strozzi in 1998, originally named an open-source relational database that did not use an SQL interface.
The author explained: "NoSQL radically departs from the relational model, and therefore should be more appropriately called NoREL, or something similar." The point of NoSQL was an opening toward use cases for which the relational model is a stretch. NoSQL databases are purpose-built for specific data models and have flexible schemas for building modern applications, using multiple data models: document, graph, key-value, and so on.
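The two simplest of these data models, key-value and document, can be mocked in a few lines of Python; the stores and records below are invented for illustration only.

```python
# Hedged sketch of two NoSQL data models: a key-value store and a
# schemaless document store, both simulated with plain Python dicts.
kv_store = {}                      # key-value: one opaque value per key
kv_store["session:42"] = "alice"

documents = []                     # document model: each record has its own schema
documents.append({"name": "Ada", "languages": ["SQL", "Cypher"]})
documents.append({"name": "Edgar", "employer": "IBM"})  # different fields, no ALTER TABLE

# Query by inspecting whatever fields each document happens to have.
ibm_people = [d["name"] for d in documents if d.get("employer") == "IBM"]
print(ibm_people)  # ['Edgar']
```

The contrast with the relational model is the absence of a fixed, shared schema: each document carries its own structure.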
Post-Relational DBMS (2000)
Object-oriented DBMSs characterize the beginning of the second millennium. An ODBMS (Object DBMS) lets users store objects within the database, each of which can contain other objects. Today, in reality, many DBMSs hybridize the relational and object models; we therefore speak of ORDBMS (Object-Relational DBMS). 2000 is also the founding year of Neo Technology, the company that started developing its graph database, Neo4j.
Programmers also created a declarative query language for the graph model, Cypher. It borrows some concepts from SPARQL (SPARQL Protocol and RDF Query Language), such as graph pattern matching. The era of the Semantic Web opened up the universe of databases: the possibility of extracting information from knowledge bases distributed across the Web opens an almost infinite application horizon.
RDF (Resource Description Framework) describes concepts and the relationships between them through triples (subject-predicate-object). Its query language, SPARQL, was developed by the RDF Data Access Working Group (a working group of the W3C consortium), which made it an official recommendation on January 15, 2008. SPARQL allows the construction of queries based on triple patterns, logical conjunctions, logical disjunctions, and optional patterns.
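A toy version of triple-pattern matching can be written directly in Python; the data and the `match` helper below are invented for this sketch and are not part of any RDF library.

```python
# Illustration of subject-predicate-object triples and a SPARQL-style
# triple-pattern match, where None plays the role of a query variable.
triples = [
    ("Neo4j", "type", "GraphDatabase"),
    ("Neo4j", "queriedWith", "Cypher"),
    ("DBpedia", "type", "KnowledgeBase"),
]

def match(pattern, data):
    """Return the triples matching a pattern; None acts as a wildcard."""
    s, p, o = pattern
    return [t for t in data
            if (s is None or t[0] == s)
            and (p is None or t[1] == p)
            and (o is None or t[2] == o)]

# "?x type ?y" — find every typed resource in the knowledge base.
typed = match((None, "type", None), triples)
print(typed)
# [('Neo4j', 'type', 'GraphDatabase'), ('DBpedia', 'type', 'KnowledgeBase')]
```

Real SPARQL engines add joins across patterns, disjunctions, and optional patterns on top of this basic matching step.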
In-Memory Database (2010)
To support real-time data analysis, so-called in-memory databases have been developed in recent years. They overcome some limitations of traditional databases in the real-time analysis of large amounts of data. An in-memory database (IMDB) stores data collections directly in a computer's main memory, delivering analysis results in real time. The main advantage of in-memory databases is significantly higher access speed, thanks to main memory; the second is easy processing of structured and unstructured data from any source.
The main disadvantage is that this type of DB allows only short-term storage of data: in the event of a crash, all volatile data is lost. Various mechanisms have been devised to mitigate this problem, including database snapshots, transaction-log backups, replication, and non-volatile RAM. The second disadvantage is the heavy use of RAM, which reduces its availability for other workloads. This is why in-memory databases are often spread across several computers connected in a grid-computing approach.
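The in-memory idea, and the snapshot remedy for its volatility, can be sketched with `sqlite3`'s `":memory:"` mode; here a second connection stands in for a durable snapshot target, which in practice would be a file on disk.

```python
# Sketch of an in-memory database: data lives in RAM, so a snapshot
# (via sqlite3's backup API) is needed to survive the loss of the original.
import sqlite3

ram_db = sqlite3.connect(":memory:")
ram_db.execute("CREATE TABLE events (ts INTEGER, value REAL)")
ram_db.executemany("INSERT INTO events VALUES (?, ?)", [(1, 0.5), (2, 0.9)])

# "Snapshot": copy the volatile database into another connection.
snapshot = sqlite3.connect(":memory:")   # in practice this would target disk
ram_db.backup(snapshot)

ram_db.close()                           # the original RAM copy is gone...
count = snapshot.execute("SELECT COUNT(*) FROM events").fetchone()[0]
print(count)  # 2  ...but the snapshot survives
```

Production IMDBs automate this with periodic snapshots, transaction-log backups, or replication, as listed above.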