Extremely Serious

Understanding the Fundamental Categories of Enterprise Data

In the world of data management, enterprises deal with diverse types of information crucial for their operations. Three fundamental categories play a pivotal role in organizing and utilizing this wealth of data: Master Data, Transaction Data, and Reference Data.

Master Data

Master data represents the core business entities that are shared across an organization. Examples include:

  • Customer Information:
    • Customer ID: CUST-001
    • Name: John Doe
    • Email: john.doe@example.com
  • Product Data:
    • Product Name: XYZ Widget
    • SKU (Stock Keeping Unit): 123456
    • Description: High-performance widget for various applications.
  • Employee Records:
    • Employee ID: 789012
    • Name: Jane Smith
    • Position: Senior Software Engineer

Master data serves as a foundational element, providing a consistent and accurate view of key entities, fostering effective decision-making and streamlined business processes.

Transaction Data

Transaction data captures the day-to-day operations of an organization. Examples include:

  • Sales Orders:
    • Order ID: SO-789
    • Date: 2023-11-20
    • Product: XYZ Widget
    • Quantity: 100 units
  • Invoices:
    • Invoice Number: INV-456
    • Date: 2023-11-15
    • Customer: John Doe
    • Total Amount: $10,000
  • Payment Records:
    • Payment ID: PAY-123
    • Date: 2023-11-25
    • Customer: Jane Smith
    • Amount: $1,500

Transaction data is dynamic, changing with each business activity, and is crucial for real-time monitoring and analysis of operational performance.

Reference Data

Reference data is static information used to categorize other data. Examples include:

  • Country Codes:
    • USA: United States
    • CAN: Canada
    • GBR: United Kingdom
  • Product Classifications:
    • Category A: Electronics
    • Category B: Apparel
    • Category C: Home Goods
  • Business Units:
    • BU-001: Sales and Marketing
    • BU-002: Research and Development
    • BU-003: Finance and Accounting

Reference data ensures consistency in data interpretation across the organization, facilitating interoperability and accurate reporting.
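
To make the three categories concrete, below is a minimal DDL sketch assuming a generic SQL dialect; the table and column names are hypothetical, not drawn from any specific system. The reference table classifies records in the master table, and the transaction table records activities against master records.

-- Reference data: static codes used to classify other data.
CREATE TABLE CountryCodes (
    country_code CHAR(3) PRIMARY KEY, -- e.g. 'USA', 'CAN'
    country_name VARCHAR(50) NOT NULL
);

-- Master data: core business entities shared across the organization.
CREATE TABLE Customers (
    customer_id   INT PRIMARY KEY,
    customer_name VARCHAR(50) NOT NULL,
    country_code  CHAR(3),
    FOREIGN KEY (country_code) REFERENCES CountryCodes(country_code)
);

-- Transaction data: records of day-to-day business activities.
CREATE TABLE SalesOrders (
    order_id    VARCHAR(10) PRIMARY KEY, -- e.g. 'SO-789'
    order_date  DATE NOT NULL,
    customer_id INT,
    quantity    INT NOT NULL,
    FOREIGN KEY (customer_id) REFERENCES Customers(customer_id)
);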

Beyond the Basics

While Master Data, Transaction Data, and Reference Data form the bedrock of enterprise data management, the landscape can be more nuanced. Additional types of data may include:

  • Metadata:
    • Data Type: Text
    • Field Length: 50 characters
    • Last Modified: 2023-11-20
  • Historical Data:
    • Past Sales Transactions
    • 2023-11-19: 80 units sold
    • 2023-11-18: 120 units sold
  • Analytical Data:
    • Business Intelligence Dashboard
    • Key Performance Indicators (KPIs) for the last quarter
    • Trends in customer purchasing behavior

Understanding the intricacies of these data categories empowers organizations to implement robust data management strategies, fostering efficiency, accuracy, and agility in an increasingly data-driven world.

In conclusion, mastering the distinctions between Master Data, Transaction Data, and Reference Data is essential for organizations aiming to harness the full potential of their information assets. By strategically managing these categories, businesses can lay the foundation for informed decision-making, operational excellence, and sustained growth.

Understanding Database Normalization

Database normalization is a critical aspect of relational database design, aimed at improving data integrity and organization by minimizing redundancy. The normalization process involves systematically organizing data to avoid certain types of anomalies that can occur during database operations. In this basic guide, we will explore the main normal forms: First Normal Form (1NF), Second Normal Form (2NF), and Third Normal Form (3NF).

1. First Normal Form (1NF):

First Normal Form (1NF) is the foundational step in the normalization process. Its primary goal is to ensure that each column in a table contains atomic, indivisible values. Additionally, there should be no repeating groups of columns.

Understanding 1NF with an Example:

Consider a table representing students and their courses:

Full_Name       Gender  Courses
Juan Dela Cruz  Male    Math, Physics
Maria Clara     Female  Chemistry, Biology

In this example, the Courses column violates 1NF because it contains multiple values. To bring it into 1NF, we split the column into separate rows for each course:

Full_Name       Gender  Course
Juan Dela Cruz  Male    Math
Juan Dela Cruz  Male    Physics
Maria Clara     Female  Chemistry
Maria Clara     Female  Biology

Now, each cell contains an atomic value, and there are no repeating groups.
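
As a sketch of the result (illustrative names, any common SQL dialect), the 1NF table can be declared and populated like this:

CREATE TABLE Student_Course (
    Full_Name VARCHAR(50),
    Gender    VARCHAR(10),
    Course    VARCHAR(50)
);

-- One row per student-course pair; every column holds a single atomic value.
INSERT INTO Student_Course (Full_Name, Gender, Course) VALUES
    ('Juan Dela Cruz', 'Male', 'Math'),
    ('Juan Dela Cruz', 'Male', 'Physics'),
    ('Maria Clara', 'Female', 'Chemistry'),
    ('Maria Clara', 'Female', 'Biology');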

2. Second Normal Form (2NF):

Second Normal Form (2NF) builds on 1NF and aims to eliminate partial dependencies. In 2NF, all non-key attributes must be fully functionally dependent on the entire primary key.

Functional Dependency

A functional dependency exists when the value of one attribute uniquely determines the value of another attribute in the same table. In other words, if knowing the value of attribute A uniquely determines the value of attribute B, we say that B is functionally dependent on A, denoted as A → B.

Candidate Keys

In the context of normalization, a candidate key is a set of one or more columns that uniquely identifies each record in a table. These are potential choices for the primary key of a table. It's essential to identify candidate keys as they play a crucial role in determining functional dependencies.

Understanding candidate keys helps in establishing proper relationships and dependencies within the data.

Primary Key

A primary key is a unique identifier for a record in a table. It serves as a means of uniquely identifying each row or record in the table. The primary key must have two main properties:

  1. Uniqueness: Each value in the primary key column must be unique across all rows in the table. No two rows can have the same primary key value.
  2. Non-nullability: The primary key column cannot contain null (empty) values. Every record must have a valid and non-null primary key.

Commonly, primary keys are implemented using a single column, but they can also be composite keys, which involve multiple columns to ensure uniqueness. Primary keys are critical for establishing relationships between tables, facilitating data retrieval, and maintaining data integrity.

Foreign Key

A foreign key is a column or a set of columns in a table that refers to the primary key of another table. It establishes a link or relationship between two tables, enabling the creation of meaningful associations between records in different tables. The foreign key in one table typically corresponds to the primary key in another table.
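
A minimal sketch tying these key concepts together (hypothetical tables, dialect-neutral DDL): each table's primary key is a chosen candidate key, and the foreign key links the two tables.

-- The primary key department_id is unique and non-null by definition.
CREATE TABLE Departments (
    department_id   INT PRIMARY KEY,
    department_name VARCHAR(50) NOT NULL
);

-- The foreign key column links each employee to one department.
CREATE TABLE Employees (
    employee_id   INT PRIMARY KEY,
    employee_name VARCHAR(50) NOT NULL,
    department_id INT,
    FOREIGN KEY (department_id) REFERENCES Departments(department_id)
);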

Understanding 2NF with an Example:

Applying 2NF to the previous example's output results in two tables: Student and Student_Course. The logical split follows functional dependency: student-specific data lives in the Student table, while each student's associated courses live in the Student_Course table.

Table: Student

Student_ID  Full_Name       Gender
1           Juan Dela Cruz  Male
2           Maria Clara     Female

  • Primary Key: {Student_ID}

The Student_ID column was added to serve as the primary key, making the function of the table obvious.

Introducing the Student_ID column would not be necessary if another candidate key were unique enough to become the primary key. In this particular example, Full_Name is the candidate key with the potential to be the primary key, but it cannot guarantee that no two people will ever have the same name. Hence, the introduction of Student_ID makes sense in this context.

The functional dependency is as follows:

{Student_ID} → {Full_Name, Gender}: The Student_ID uniquely determines the Full_Name and Gender in the first table. For example, for Student_ID 1, the combination of Full_Name and Gender is uniquely determined as {Juan Dela Cruz, Male}.

This is a functional dependency because knowing the values on the left side of the arrow uniquely determines the values on the right side.

Table: Student_Course

Student_ID  Course
1           Math
1           Physics
2           Chemistry
2           Biology

  • Primary Key: {Student_ID, Course}
  • Foreign Key: {Student_ID} references the primary key of the Student table.

Now, each table represents a single function (i.e., one for student data and another for course data), and all non-key attributes are fully dependent on the primary key.
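
In DDL form, the 2NF decomposition could be sketched as follows (dialect-neutral, constraint names omitted):

CREATE TABLE Student (
    Student_ID INT PRIMARY KEY,
    Full_Name  VARCHAR(50) NOT NULL,
    Gender     VARCHAR(10)
);

CREATE TABLE Student_Course (
    Student_ID INT,
    Course     VARCHAR(50),
    PRIMARY KEY (Student_ID, Course), -- composite primary key
    FOREIGN KEY (Student_ID) REFERENCES Student(Student_ID)
);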

3. Third Normal Form (3NF):

Third Normal Form (3NF) is a crucial stage in the normalization process, building on the principles of 1NF and 2NF. The primary goal of 3NF is to eliminate transitive dependencies, ensuring that non-prime attributes do not depend on other non-prime attributes.

Transitive Dependency

  • Transitive dependency is a specific type of functional dependency that occurs when the value of one attribute determines the value of another attribute through a third attribute.
  • If A determines B (A → B) and B determines C (B → C), then A indirectly determines C through the transitive dependency (A → B → C).
  • In database normalization, transitive dependencies are generally undesirable, and the goal is to eliminate them to achieve higher normal forms.

Non-Prime Attributes

In the context of normalization, non-prime attributes are attributes that are not part of any candidate key. In other words, they are attributes that are not used to uniquely identify records in a table. Prime attributes, on the other hand, are part of a candidate key.

It's crucial to identify and handle dependencies involving non-prime attributes to achieve a well-organized and normalized database.

Understanding 3NF with an Example:

Expanding the Student_Course table from the previous example and introducing the Department column:

Student_ID  Course     Department
1           Algebra    Mathematics
1           Physics    Science
2           Chemistry  Science
2           Biology    Science

Candidate Key:

  • {Student_ID, Course}

{Student_ID} alone is not a candidate key here, since a student appears in one row per course. In this case, the data has a transitive dependency: the candidate key {Student_ID, Course} determines Course, and Course in turn determines Department, so Department depends on the key only through Course.

Identifying Transitive Dependency

In the given example, the transitive dependency is represented as:

  • {Course} → Department

This dependency indicates that the non-prime attribute Department depends on Course, which is only part of the candidate key, rather than on the key as a whole.

Applying 3NF:

To bring this table into 3NF, we need to separate the transitive dependency into a new table (i.e. Course_Department). We create two tables: one for student-course relationships, and one for course-department relationships.

Table: Student_Course

Student_ID  Course
1           Algebra
1           Physics
2           Chemistry
2           Biology

This is the same output as in 2NF, with the transitive dependency removed; it shows that introducing the Department attribute earlier is what created the transitive dependency.

Table: Course_Department

Course        Department
Algebra       Mathematics
Physics       Science
Chemistry     Science
Biology       Science
Trigonometry  Mathematics

  • Primary Key: {Course}

Now, the tables are in 3NF. The transitive dependency has been eliminated by decomposing the original table into two tables. Each table represents a separate entity with clear functional dependencies. The relationships are maintained through primary and foreign keys.
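
As a sketch, the 3NF schema could be declared as follows (Student as in the 2NF example); the foreign key from Student_Course.Course to Course_Department is one reasonable way to preserve the relationship after the decomposition:

CREATE TABLE Course_Department (
    Course     VARCHAR(50) PRIMARY KEY,
    Department VARCHAR(50) NOT NULL
);

CREATE TABLE Student_Course (
    Student_ID INT,
    Course     VARCHAR(50),
    PRIMARY KEY (Student_ID, Course),
    FOREIGN KEY (Student_ID) REFERENCES Student(Student_ID),
    FOREIGN KEY (Course) REFERENCES Course_Department(Course)
);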

Normalization helps in maintaining data integrity, reducing redundancy, and making the database more adaptable to changes. However, it's essential to strike a balance and not over-normalize, as it could lead to complex queries and performance issues in certain scenarios.

Understanding Database Cardinality Relationships

In the realm of relational databases, cardinality relationships define the connections between tables and govern how instances of one entity relate to instances of another. Let's delve into the three cardinality relationships using a consistent university example, illustrating each with table declarations.

1. One-to-One (1:1) Relationship

In a one-to-one relationship, each record in the first table corresponds to exactly one record in the second table, and vice versa. Consider the relationship between Students and DormRooms:

CREATE TABLE DormRooms (
    dorm_room_id INT PRIMARY KEY,
    room_number INT
);

CREATE TABLE Students (
    student_id INT PRIMARY KEY,
    student_name VARCHAR(50),
    dorm_room_id INT UNIQUE, -- UNIQUE constraint enforces the 1:1 relationship
    FOREIGN KEY (dorm_room_id) REFERENCES DormRooms(dorm_room_id)
);

Here, each student is assigned one dorm room, and each dorm room is assigned to one student.

2. One-to-Many (1:N) Relationship

In a one-to-many relationship, each record in the first table can be associated with multiple records in the second table, but each record in the second table is associated with only one record in the first table. Consider the relationship between Departments and Professors:

CREATE TABLE Departments (
    department_id INT PRIMARY KEY,
    department_name VARCHAR(50)
);

CREATE TABLE Professors (
    professor_id INT PRIMARY KEY,
    professor_name VARCHAR(50),
    department_id INT,
    FOREIGN KEY (department_id) REFERENCES Departments(department_id)
);

In this case, each department can have multiple professors, but each professor is associated with only one department.

3. Many-to-Many (N:N) Relationship

In a many-to-many relationship, multiple records in the first table can be associated with multiple records in the second table, and vice versa. Consider the relationship between Students and Courses:

CREATE TABLE Students (
    student_id INT PRIMARY KEY,
    student_name VARCHAR(50)
);

CREATE TABLE Courses (
    course_id INT PRIMARY KEY,
    course_name VARCHAR(50)
);

CREATE TABLE StudentCourses (
    student_id INT,
    course_id INT,
    PRIMARY KEY (student_id, course_id),
    FOREIGN KEY (student_id) REFERENCES Students(student_id),
    FOREIGN KEY (course_id) REFERENCES Courses(course_id)
);

In this scenario, many students can enroll in multiple courses, and each course can have multiple students.
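
For example, a query like the following (a sketch, assuming the tables above are populated) resolves the many-to-many link by joining through the junction table:

SELECT s.student_name, c.course_name
FROM Students s
JOIN StudentCourses sc ON sc.student_id = s.student_id
JOIN Courses c ON c.course_id = sc.course_id
ORDER BY s.student_name, c.course_name;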

Understanding these cardinality relationships is essential for designing robust and efficient relational databases, ensuring the integrity and consistency of data across tables.

Common Table Expression (CTE) – With Clause

The WITH clause, also known as a common table expression (CTE) or subquery refactoring, defines a temporary named result set.

SQL:1999 added the WITH clause to define "statement-scoped views". They are not stored in the database schema; instead, they are only valid in the query they belong to. This makes it possible to improve the structure of a statement without polluting the global namespace.

Syntax

with <QUERY_NAME_1> (<COLUMN_1>[, <COLUMN_2>][, <COLUMN_N>]) as
     (<INNER_SELECT_STATEMENT>)
[,<QUERY_NAME_2> (<COLUMN_1>[, <COLUMN_2>][, <COLUMN_N>]) as
     (<INNER_SELECT_STATEMENT>)]
<SELECT_STATEMENT>

Non-Recursive Example

with sales_tbl as (
select sales.*
	from (VALUES
		('Spiderman',1,19750),
		('Batman',1,19746),
		('Superman',1,9227),
		('Iron Man',1,9227),
		('Wonder Woman',2,16243),
		('Kikkoman',2,17233),
		('Cat Woman',2,8308),
		('Ant Man',3,19427),
		('Aquaman',3,16369),
		('Iceman',3,9309)
	) sales (emp_name,dealer_id,sales)
)
select ROW_NUMBER() over (order by dealer_id) as rownumber, *
from sales_tbl

Recursive Example

WITH [counter] AS (

   SELECT 1 AS n  -- Executes first and only once.

   UNION ALL      -- UNION ALL must be used.

   SELECT n + 1   -- The portion that will be executed 
   FROM [counter] -- repeatedly until there's no row 
                  -- to return.

   WHERE  n < 50  -- Ensures that the query stops.
)
SELECT n FROM [counter]

SQL Window Functions

Window functions are closely related to aggregate functions, except that they retain all the rows instead of collapsing them into one row per group.

Categories

  • Aggregate Window Functions
  • Ranking Window Functions
  • Value Window Functions

Example

with sales_tbl as (
select sales.*
	from (VALUES
		('Spiderman',1,19750),
		('Batman',1,19746),
		('Superman',1,9227),
		('Iron Man',1,9227),
		('Wonder Woman',2,16243),
		('Kikkoman',2,17233),
		('Cat Woman',2,8308),
		('Ant Man',3,19427),
		('Aquaman',3,16369),
		('Iceman',3,9309)
	) sales (emp_name,dealer_id,sales)
)
select ROW_NUMBER() over (order by dealer_id) as rownumber,
	*,
	AVG(sales) over (partition by dealer_id) as [Average Sales by DealerID],
	SUM(sales) over (partition by dealer_id) as [Total Sales by DealerID],
	SUM(sales) over (partition by dealer_id order by sales rows between unbounded preceding and current row) as [Running Total by DealerID]
from sales_tbl

Value Window Functions

LAG()

Accesses data from a previous row in the same result set without the use of a self-join starting with SQL Server 2012 (11.x). LAG provides access to a row at a given physical offset that comes before the current row. Use this analytic function in a SELECT statement to compare values in the current row with values in a previous row.

LEAD()

Accesses data from a subsequent row in the same result set without the use of a self-join starting with SQL Server 2012 (11.x). LEAD provides access to a row at a given physical offset that follows the current row. Use this analytic function in a SELECT statement to compare values in the current row with values in a following row.

Common Syntax

LAG | LEAD
( expression )
OVER ( [ PARTITION BY expr_list ] [ ORDER BY order_list ] )
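
As an illustrative sketch reusing a subset of the sales_tbl data from the earlier CTE examples (SQL Server syntax), LAG and LEAD expose each row's neighbors within a dealer; both return NULL where no previous or next row exists in the partition:

with sales_tbl as (
    select sales.*
    from (VALUES
        ('Spiderman',1,19750),
        ('Batman',1,19746),
        ('Superman',1,9227),
        ('Wonder Woman',2,16243),
        ('Kikkoman',2,17233),
        ('Cat Woman',2,8308)
    ) sales (emp_name,dealer_id,sales)
)
select emp_name, dealer_id, sales,
    LAG(sales) over (partition by dealer_id order by sales) as [Previous Sales],
    LEAD(sales) over (partition by dealer_id order by sales) as [Next Sales]
from sales_tbl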

FIRST_VALUE()

Returns the first value in an ordered set of values in SQL Server 2019 (15.x).

LAST_VALUE()

Returns the last value in an ordered set of values in SQL Server 2019 (15.x).

Common Syntax

FIRST_VALUE | LAST_VALUE
( expression ) OVER
( [ PARTITION BY expr_list ] [ ORDER BY order_list ][ frame_clause ] )
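
A sketch with the same sample data: LAST_VALUE needs the explicit frame clause below, because the default frame ends at the current row and would otherwise simply return the current row's value.

with sales_tbl as (
    select sales.*
    from (VALUES
        ('Spiderman',1,19750),
        ('Batman',1,19746),
        ('Superman',1,9227),
        ('Wonder Woman',2,16243),
        ('Kikkoman',2,17233),
        ('Cat Woman',2,8308)
    ) sales (emp_name,dealer_id,sales)
)
select emp_name, dealer_id, sales,
    FIRST_VALUE(emp_name) over (partition by dealer_id order by sales desc) as [Top Seller],
    LAST_VALUE(emp_name) over (partition by dealer_id order by sales desc
        rows between unbounded preceding and unbounded following) as [Bottom Seller]
from sales_tbl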

Ranking Window Functions

CUME_DIST()

For SQL Server, this function calculates the cumulative distribution of a value within a group of values. In other words, CUME_DIST calculates the relative position of a specified value in a group of values. Assuming ascending ordering, the CUME_DIST of a value in row r is defined as the number of rows with values less than or equal to that value in row r, divided by the number of rows evaluated in the partition or query result set.

DENSE_RANK()

This function returns the rank of each row within a result set partition, with no gaps in the ranking values. The rank of a specific row is one plus the number of distinct rank values that come before that specific row.

NTILE()

Distributes the rows in an ordered partition into a specified number of groups. The groups are numbered, starting at one. For each row, NTILE returns the number of the group to which the row belongs.

PERCENT_RANK()

Calculates the relative rank of a row within a group of rows in SQL Server 2019 (15.x). Use PERCENT_RANK to evaluate the relative standing of a value within a query result set or partition. PERCENT_RANK is similar to the CUME_DIST function.

RANK()

Returns the rank of each row within the partition of a result set. The rank of a row is one plus the number of ranks that come before the row in question.

ROW_NUMBER()

Numbers the output of a result set. More specifically, returns the sequential number of a row within a partition of a result set, starting at 1 for the first row in each partition.

Common Syntax

window_function ( ) OVER ( [ PARTITION BY expr_list ] [ ORDER BY order_list ] )
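
The following sketch (a subset of the earlier sample data, SQL Server syntax) contrasts the ranking functions side by side; the tie at 9227 shows how RANK leaves a gap while DENSE_RANK does not:

with sales_tbl as (
    select sales.*
    from (VALUES
        ('Spiderman',1,19750),
        ('Batman',1,19746),
        ('Superman',1,9227),
        ('Iron Man',1,9227),
        ('Wonder Woman',2,16243),
        ('Kikkoman',2,17233)
    ) sales (emp_name,dealer_id,sales)
)
select emp_name, sales,
    ROW_NUMBER()   over (order by sales desc) as [Row Number],
    RANK()         over (order by sales desc) as [Rank],
    DENSE_RANK()   over (order by sales desc) as [Dense Rank],
    NTILE(3)       over (order by sales desc) as [Ntile Group],
    PERCENT_RANK() over (order by sales desc) as [Percent Rank],
    CUME_DIST()    over (order by sales desc) as [Cume Dist]
from sales_tbl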

Aggregate Window Functions

AVG()

This function returns the average of the values in a group.

COUNT()

This function returns the number of items found in a group.

MAX()

Returns the maximum value in the expression.

MIN()

Returns the minimum value in the expression.

STDEV()

Returns the statistical standard deviation of all values in the specified expression.

STDEVP()

Returns the statistical standard deviation for the population for all values in the specified expression.

SUM()

Returns the sum of all the values, or only the DISTINCT values, in the expression.

VAR()

Returns the statistical variance of all values in the specified expression.

VARP()

Returns the statistical variance for the population for all values in the specified expression.

Common Syntax

window_function ( [ ALL ] expression ) 
OVER ( [ PARTITION BY expr_list ] [ ORDER BY order_list frame_clause ] )

Using Embedded Derby in Java 9 with Gradle

  1. Add the following dependencies to your build.gradle file:
    compile group: 'org.apache.derby', name: 'derby', version: '10.15.1.3'
    compile group: 'org.apache.derby', name: 'derbyshared', version: '10.15.1.3'
  2. Add the following entries to your module-info.java file:
    requires org.apache.derby.engine;
    requires org.apache.derby.commons;
    requires java.sql;
    
  3. In your Java class, you can create a connection like the following:
    final String DATABASE_NAME = "sample_table";
    String connectionURL = String.format("jdbc:derby:%s;create=true", DATABASE_NAME);
    connection = DriverManager.getConnection(connectionURL);
    
  4. Perform the needed database operations with the connection (e.g., create a table, execute a query, or call a stored procedure).
  5. When you're done using the connection, close it and shut down the Derby engine like the following:
    connection.close();
    
    boolean gotSQLExc = false;
    try {
        //shutdown all databases and the Derby engine
        DriverManager.getConnection("jdbc:derby:;shutdown=true");
    } catch (SQLException se)  {
        if ( se.getSQLState().equals("XJ015") ) {
            gotSQLExc = true;
        }
    }
    if (!gotSQLExc) {
        System.out.println("Database did not shut down normally");
    }

    A clean shutdown always throws SQL exception XJ015, which can be ignored.
