
Oracle 11g Sql Chapter 4 Answers


In this chapter, we learned about the role that data and databases play in the context of information systems. Data is made up of small facts and information without context. If you give data context, then you have information. Knowledge is gained when information is consumed and used for decision making. A database is an organized collection of related information. Relational databases are the most widely used type of database, where data is structured into tables and all tables must be related to each other through unique identifiers. A database management system (DBMS) is a software application that is used to create and manage databases, and can take the form of a personal DBMS, used by one person, or an enterprise DBMS that can be used by multiple users. A data warehouse is a special form of database that takes data from other databases in an enterprise and organizes it for analysis. Data mining is the process of looking for patterns and relationships in large data sets. Many businesses use databases, data warehouses, and data-mining techniques in order to produce business intelligence and gain a competitive advantage.

In this chapter, we will focus on using SQL to create the database and table structures, mainly using SQL as a data definition language (DDL). In Chapter 16, we will use SQL as a data manipulation language (DML) to insert, delete, select and update data within the database tables.
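As a concrete sketch of SQL used as a DDL, the following illustrative example creates two related tables. The table and column names are hypothetical, and SQLite (via Python's `sqlite3` module) is used here purely for portability; the same `CREATE TABLE` statements apply, with minor syntax differences, to other DBMSs:

```python
import sqlite3

# Illustrative DDL sketch: table and column names are hypothetical.
conn = sqlite3.connect(":memory:")
cur = conn.cursor()
# Each CREATE TABLE statement is a DDL statement defining structure, not data.
cur.execute("""
    CREATE TABLE department (
        dept_id   INTEGER PRIMARY KEY,
        dept_name TEXT NOT NULL
    )
""")
cur.execute("""
    CREATE TABLE employee (
        emp_id    INTEGER PRIMARY KEY,
        emp_name  TEXT NOT NULL,
        dept_id   INTEGER REFERENCES department(dept_id)
    )
""")
conn.commit()
# List the tables that the DDL statements created.
tables = [row[0] for row in cur.execute(
    "SELECT name FROM sqlite_master WHERE type='table' ORDER BY name")]
print(tables)  # → ['department', 'employee']
```

Note that no rows exist yet: DDL defines structure, while inserting and querying rows is the job of DML, taken up in Chapter 16.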

This chapter deals with simple group operations involving the aggregate functions and the GROUP BY and HAVING clauses. Advanced group operations such as ROLLUP, CUBE, and GROUPING SETS are discussed in Chapter 13.
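A small self-contained sketch of these clauses follows; the employee data is invented for illustration (SQLite is used here for portability). `AVG` is the aggregate function, `GROUP BY` produces one row per department, and `HAVING` filters the groups *after* aggregation:

```python
import sqlite3

# Hypothetical data to illustrate aggregates with GROUP BY and HAVING.
conn = sqlite3.connect(":memory:")
cur = conn.cursor()
cur.execute("CREATE TABLE emp (name TEXT, dept TEXT, salary REAL)")
cur.executemany("INSERT INTO emp VALUES (?, ?, ?)", [
    ("Ann", "SALES", 3000), ("Bob", "SALES", 2000),
    ("Cho", "IT", 4000), ("Dee", "IT", 4500), ("Eli", "HR", 2500),
])
# GROUP BY forms one group per dept; HAVING keeps only groups whose
# average salary exceeds 2600 (a WHERE clause could not do this, since
# it runs before aggregation).
rows = cur.execute("""
    SELECT dept, COUNT(*) AS n, AVG(salary) AS avg_salary
    FROM emp
    GROUP BY dept
    HAVING AVG(salary) > 2600
    ORDER BY dept
""").fetchall()
print(rows)  # → [('IT', 2, 4250.0)]
```

Only the IT group survives the `HAVING` filter: SALES and HR both average 2500, below the 2600 threshold.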

As these examples show, querying data through the Spark SQL interface is much like writing a regular SQL query against a relational database table. Although the queries are in SQL, their readability and semantics closely resemble the DataFrame API operations, which you encountered in Chapter 3 and will explore further in the next chapter.

Writing or saving a DataFrame as a table or file is a common operation in Spark. To write a DataFrame, you simply use the DataFrameWriter methods and arguments outlined earlier in this chapter, supplying the location in which to save the Parquet files. For example:
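A minimal sketch of such a write, assuming a DataFrame `df` already exists in a running Spark session; the output path and the compression option below are illustrative choices, not requirements:

```python
# Sketch only: assumes an active SparkSession and an existing DataFrame `df`.
# The path "/tmp/data/parquet/df_example" is a hypothetical save location.
(df.write
   .format("parquet")              # Parquet is also Spark's default format
   .mode("overwrite")              # replace any existing files at the path
   .option("compression", "snappy")
   .save("/tmp/data/parquet/df_example"))
```

Reading the data back is symmetrical: `spark.read.format("parquet").load(path)` returns a new DataFrame.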

Appenders are named entities, which ensures that they can be referenced by name, a quality that proves instrumental in configuration scripts. The Appender interface extends the FilterAttachable interface, so one or more filters can be attached to an appender instance. Filters are discussed in detail in a subsequent chapter.

The ConsoleAppender, as the name indicates, appends to the console, or more precisely to System.out or System.err, the former being the default target. ConsoleAppender formats events with the help of an encoder specified by the user. Encoders will be discussed in a subsequent chapter. Both System.out and System.err are of type PrintStream. Consequently, they are wrapped inside an OutputStreamWriter which buffers I/O operations.
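The points above can be sketched in a minimal logback configuration file; the appender name `STDOUT` and the output pattern are illustrative choices, not requirements:

```xml
<configuration>
  <!-- A named appender: "STDOUT" is how other elements refer to it. -->
  <appender name="STDOUT" class="ch.qos.logback.core.ConsoleAppender">
    <!-- ConsoleAppender formats events via a user-specified encoder. -->
    <encoder>
      <pattern>%d{HH:mm:ss.SSS} [%thread] %-5level %logger{36} - %msg%n</pattern>
    </encoder>
  </appender>

  <root level="debug">
    <!-- The appender is attached to the root logger by name. -->
    <appender-ref ref="STDOUT" />
  </root>
</configuration>
```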

A sample application, chapters.appenders.mail.EMail, generates a number of log messages followed by a single error message. It takes two parameters. The first parameter is an integer corresponding to the number of logging events to generate. The second parameter is the logback configuration file. The last logging event generated by the EMail application, an ERROR, will trigger the transmission of an email message.

Once data have been collected and linked, it is necessary to store and organize them. Many social scientists are used to working with one analytical file, often in SAS, Stata, SPSS, or R. But most organizations store (or should store) their data in databases, which makes it critical for social scientists to learn how to create, manage, and use databases for data storage and analysis. This chapter describes the concept of databases and introduces different types of databases and analysis languages (in particular, relational databases and SQL, respectively) that allow data to be stored and organized for rapid and efficient exploration and analysis.

We turn now to the question of how to store, organize, and manage the data used in data-intensive social science. As the data with which you work grow in volume and diversity, effective data management becomes increasingly important to keep scale and complexity from overwhelming your research processes. In particular, when you deal with data that are frequently updated, with changes made by different people, you will want to use database management systems (DBMSs) instead of maintaining data in text files or within siloed statistical packages such as SAS, SPSS, Stata, and R. Indeed, we go so far as to say: if you take away just one thing from this book (or at least from this chapter), it should be this: Use a database!

As we explain in this chapter, DBMSs greatly simplify data management. They require a little bit of effort to set up, but are worth it. They permit large amounts of data to be organized in multiple ways that allow for efficient and rapid exploration via query languages; durable and reliable storage that maintains data consistency; scaling to large data sizes; and intuitive analysis, both within the DBMS itself and via connectors to other data analysis packages and tools. DBMSs have become a critical component of most real-world systems, from handling transactions in financial systems to delivering data to power websites, dashboards, and software that we use every day. If you are using a production-level enterprise system, chances are there is a database in the back end. DBMSs are multi-purpose and well suited for organizing social science data and for supporting data exploration and analysis.

These considerations bring us to the topic of this chapter, namely database management systems. A DBMS handles all of the issues listed above, and more. As we will see below when we look at concrete examples, a DBMS allows us to define a logical design that fits the structure of our data. The DBMS then creates a data model (more on this below) that allows these data to be stored, queried, and updated efficiently and reliably on disk, thus providing independence from underlying physical storage. It supports efficient access to data through query languages and (somewhat) automatic optimization of those queries to permit fast analysis. Importantly, it also supports concurrent access by multiple users, which is not an option for file-based data storage. It supports transactions, meaning that any update to a database is performed in its entirety or not at all, even in the face of computer failures or multiple concurrent updates. It also reduces the time spent both by analysts, by making it easy to express complex analytical queries concisely, and on data administration, by providing simple and uniform data administration interfaces.

Hundreds of different open source, commercial, and cloud-hosted DBMSs are available, and new ones appear every day. However, you only need to understand a relatively small number of concepts and major database types to make sense of this diversity. Table 4.2 defines the major classes of DBMSs that we will consider in this chapter. We consider only a few of these in any detail.

We introduce each of these features below, although not in that order and certainly not exhaustively. Our goal here is to give the reader enough insight into how relational databases work and what they do well; an in-depth SQL tutorial is beyond the scope of this book, but we highly recommend the references at the end of this chapter.

So far we have created a database and two empty tables. The next step is to add data to the tables. We can of course do that manually, row by row, but in most cases we will import data from another source, such as a CSV file. Listing "Load data" shows the two statements that load the data of Figure 4.2 into our two tables. (Here and elsewhere in this chapter, we use the MySQL DBMS. The SQL syntax used by different DBMSs differs in various, mostly minor ways.) Each statement specifies the name of the file from which data is to be read and the table into which it is to be loaded. The fields terminated by "," clause tells SQL that values are separated by commas, and ignore 1 lines tells SQL to skip the header line. The list of column names specifies how values from the file are to be assigned to columns in the table.
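In MySQL, such a load statement might look like the following sketch; the file name, table name, and column list are hypothetical stand-ins for those in the actual listing:

```sql
-- Hypothetical MySQL example: names are illustrative, not from Figure 4.2.
LOAD DATA LOCAL INFILE 'grants.csv'
INTO TABLE grants
FIELDS TERMINATED BY ','   -- values in the file are comma-separated
IGNORE 1 LINES             -- skip the CSV header line
(grant_id, person_id, amount);
```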

The enormous popularity of DBMSs means that there are many good books to be found. Classic textbooks such as those by Silberschatz et al. (2010) and Ramakrishnan and Gehrke (2002) provide a great deal of technical detail. The DB Engines website collects information on DBMSs. There are also many useful online tutorials, and of course StackExchange and other online forums often have answers to your technical questions.

We did not consider in this chapter native Extensible Markup Language (XML) databases and Resource Description Framework (RDF) triple stores, as these are not typically used for data management. However, they do play a fundamental role in metadata and knowledge management. See, for example, Sesame (Broekstra, Kampman, and Van Harmelen 2002).
