Introduce data mining techniques.

Data mining is the analysis of large observational data sets to find unsuspected relationships and to summarize the data in novel ways that are both understandable and useful to the data owner.

Data mining techniques can yield the benefits of automation on existing software and hardware platforms, and can be implemented on new systems as existing platforms are upgraded and new products developed. When data mining tools are implemented on high-performance parallel processing systems, they can analyze massive databases in minutes. Faster processing means that users can automatically experiment with more models to understand complex data. High speed makes it practical for users to analyze huge quantities of data. Larger databases, in turn, yield improved predictions.

Data mining techniques are the result of a long process of research and product development. This evolution began when business data was first stored on computers, continued with improvements in data access, and, more recently, generated technologies that allow users to navigate through their data in real time. Data mining takes this evolutionary process beyond retrospective data access and navigation to prospective and proactive information delivery. Data mining is ready for application in the business community because it is supported by three technologies that are now sufficiently mature:

a) Massive data collection

b) Powerful multiprocessor computers

c) Data mining algorithms


Explain the normalization concept in MySQL.

Database normalization is the process of organizing the fields and tables of a relational database to minimize redundancy and dependency. Normalization usually involves dividing large tables into smaller tables and defining relationships between them. The objective is to isolate data so that additions, deletions, and modifications of a field can be made in just one table and then propagated through the rest of the database via the defined relationships.
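As a minimal sketch of the "modify in one place" idea (the customers/orders tables and column names here are hypothetical, not from the source): once an address is stored only in the customers table, a change is made in a single row and reaches every related order through the defined relationship.

-- Change the address in exactly one place.
UPDATE customers
SET address = '42 New Street'
WHERE customer_id = 7;

-- Every order placed by that customer now sees the new address
-- through the relationship; there are no duplicated copies to update.
SELECT o.order_id, c.address
FROM orders o
JOIN customers c ON c.customer_id = o.customer_id
WHERE c.customer_id = 7;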

Explain the normalization concept.

The normalization process involves getting our data to conform to three progressive normal forms, and a higher level of normalization cannot be achieved until the previous levels have been achieved.

First normal form (1NF) is a property of a relation in a relational database. A relation is in first normal form if the domain of each attribute contains only atomic values, and the value of each attribute contains only a single value from that domain.
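To make the atomic-value requirement concrete, here is a sketch in MySQL (the table and column names are illustrative assumptions): a column holding a comma-separated list of phone numbers violates 1NF, so each number gets its own row.

-- Violates 1NF: phones packs several values into one attribute.
CREATE TABLE customer_bad (
    customer_id INT PRIMARY KEY,
    name VARCHAR(100),
    phones VARCHAR(255) -- e.g. '555-1234, 555-5678'
);

-- In 1NF: every attribute value is atomic; one row per phone number.
CREATE TABLE customer (
    customer_id INT PRIMARY KEY,
    name VARCHAR(100)
);

CREATE TABLE customer_phone (
    customer_id INT,
    phone VARCHAR(20),
    PRIMARY KEY (customer_id, phone),
    FOREIGN KEY (customer_id) REFERENCES customer(customer_id)
);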

Database normalization is the process of representing a database in terms of relations in standard normal forms, where first normal form is a minimal requirement.

Second normal form (2NF). A table that is in first normal form (1NF) must meet additional criteria if it is to qualify for second normal form. Specifically, a table is in 2NF if and only if it is in 1NF and no non-prime attribute is dependent on any proper subset of any candidate key of the table. A non-prime attribute of a table is an attribute that is not a part of any candidate key of the table.
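A sketch of the 2NF rule, again with hypothetical order/product tables: product_name depends on product_id alone, which is a proper subset of the candidate key (order_id, product_id), so it must move to its own table.

-- Violates 2NF: product_name depends only on product_id,
-- a proper subset of the candidate key (order_id, product_id).
CREATE TABLE order_item_bad (
    order_id INT,
    product_id INT,
    product_name VARCHAR(100),
    quantity INT,
    PRIMARY KEY (order_id, product_id)
);

-- In 2NF: the partially dependent attribute lives with its full key.
CREATE TABLE product (
    product_id INT PRIMARY KEY,
    product_name VARCHAR(100)
);

CREATE TABLE order_item (
    order_id INT,
    product_id INT,
    quantity INT,
    PRIMARY KEY (order_id, product_id),
    FOREIGN KEY (product_id) REFERENCES product(product_id)
);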

Third normal form (3NF) is the third step in normalizing a database; it builds on the first and second normal forms (1NF and 2NF). A table is in 3NF if and only if it is in 2NF and no non-prime attribute is transitively dependent on any candidate key of the table.
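One illustrative sketch (the employee/department tables are assumed for the example): dept_name depends on dept_id, which in turn depends on the key employee_id, a transitive dependency that 3NF removes.

-- Violates 3NF: dept_name depends on dept_id, which in turn
-- depends on the key employee_id (a transitive dependency).
CREATE TABLE employee_bad (
    employee_id INT PRIMARY KEY,
    name VARCHAR(100),
    dept_id INT,
    dept_name VARCHAR(100)
);

-- In 3NF: non-prime attributes depend on the key alone.
CREATE TABLE department (
    dept_id INT PRIMARY KEY,
    dept_name VARCHAR(100)
);

CREATE TABLE employee (
    employee_id INT PRIMARY KEY,
    name VARCHAR(100),
    dept_id INT,
    FOREIGN KEY (dept_id) REFERENCES department(dept_id)
);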
