Please use this identifier to cite or link to this item:
http://hdl.handle.net/10603/340925
Title: Some Aspects of Decision Tree Classifiers
Researcher: Panhalkar Archana Ramkisanrao
Guide(s): Doye Dharmpal D.
Keywords: Computer Science; Computer Science Information Systems; Engineering and Technology
University: Swami Ramanand Teerth Marathwada University
Completed Date: 2021
Abstract: A large amount of data is generated by the digital revolution in a human computerized world. The challenge of automatically analyzing this digital data has increased the need to develop data mining models. Depending on user requirements, a variety of data mining techniques are used, such as classification, clustering, regression, summarization, association, and anomaly detection. In the field of data mining, classification plays a vital role in both predicting outputs and discovering patterns in data. Among the various classification techniques, the decision tree is found to be a simple, expressive, robust, and efficient classifier.

A decision tree is a flowchart-like structure that generates human-interpretable knowledge with high prediction accuracy. An ensemble of decision trees, called a decision tree forest, is more accurate and more robust to noise than a single decision tree. In the literature, various decision tree and decision tree forest induction algorithms have been proposed for accurate classification of data. However, some aspects of decision trees and decision tree forests leave room for further improvement. As data grows larger, the decision tree becomes large and complex. To tackle this limitation, a clustering-based preprocessing technique called Decision Tree based on Cluster Analysis Pre-processing is applied to reduce datasets. This approach finds informative instances using supervised and unsupervised clustering, which optimizes decision trees and yields higher prediction accuracy. Two novel methods of selecting representative instances from large datasets are proposed in this thesis. These efficient algorithms are capable of selecting representative instances from small, medium, and large datasets. Along with increasing classification accuracy, they reduce the size, the number of leaves, and the training time of decision trees.
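The idea of training a tree on representative instances rather than the full dataset can be sketched as follows. This is an illustrative approximation only, not the thesis's own algorithm (which is not given here): it uses plain k-means to pick, per cluster, the instance nearest the centroid, and the dataset, the number of representatives `k`, and all parameters are assumptions for demonstration.

```python
# Illustrative sketch (NOT the thesis's DTCAP method): reduce a dataset to
# cluster representatives, then train a decision tree on that reduced set.
import numpy as np
from sklearn.cluster import KMeans
from sklearn.datasets import load_iris
from sklearn.tree import DecisionTreeClassifier

X, y = load_iris(return_X_y=True)

k = 30  # number of representative instances to keep (assumed, tunable)
km = KMeans(n_clusters=k, n_init=10, random_state=0).fit(X)

# For each cluster, keep the single instance closest to its centroid.
reps = []
for c in range(k):
    members = np.where(km.labels_ == c)[0]
    dists = np.linalg.norm(X[members] - km.cluster_centers_[c], axis=1)
    reps.append(members[np.argmin(dists)])
reps = np.array(reps)

# Train on the 30 representatives instead of all 150 instances; the tree
# is typically much smaller while retaining most of the accuracy.
tree = DecisionTreeClassifier(random_state=0).fit(X[reps], y[reps])
print(tree.tree_.node_count, round(tree.score(X, y), 3))
```

A supervised variant (as the abstract hints) might additionally use the labels when clustering, e.g. clustering each class separately; the unsupervised form above is just the simplest sketch.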
The decision tree forest is based on the principle that creating several weak learners is better than creating a single strong learner. Some aspects of decision tree fores
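The weak-learner principle mentioned above can be illustrated with off-the-shelf tools; this is a generic sketch, not the thesis's forest algorithm. It compares one unrestricted tree against a random forest of deliberately shallow (weak) trees; the dataset, depth limit, and forest size are all assumptions for demonstration.

```python
# Illustrative sketch of "many weak learners vs. one strong learner":
# a single deep tree versus a forest of depth-limited trees.
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score
from sklearn.tree import DecisionTreeClassifier

X, y = load_breast_cancer(return_X_y=True)

single = DecisionTreeClassifier(random_state=0)          # one strong learner
forest = RandomForestClassifier(n_estimators=100,        # many weak learners
                                max_depth=3, random_state=0)

s_single = cross_val_score(single, X, y, cv=5).mean()
s_forest = cross_val_score(forest, X, y, cv=5).mean()
print(round(s_single, 3), round(s_forest, 3))
```

On most tabular datasets the averaged vote of many shallow trees generalizes better and is less sensitive to noise than a single fully grown tree, which is the motivation the abstract states.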
Pagination: 139p
URI: http://hdl.handle.net/10603/340925
Appears in Departments: Department of Computer Science and Engineering
Files in This Item:
File | Description | Size | Format | |
---|---|---|---|---|
01_title.pdf | Attached File | 485.56 kB | Adobe PDF | View/Open |
02_certificate.pdf | | 244.53 kB | Adobe PDF | View/Open |
03_abstract.pdf | | 182.4 kB | Adobe PDF | View/Open |
04_declaration.pdf | | 70.74 kB | Adobe PDF | View/Open |
05_acknowledgement.pdf | | 179.16 kB | Adobe PDF | View/Open |
06_contents.pdf | | 46.98 kB | Adobe PDF | View/Open |
07_list_of_tables.pdf | | 186.66 kB | Adobe PDF | View/Open |
08_list_of_figures.pdf | | 187 kB | Adobe PDF | View/Open |
09_abbreviations.pdf | | 263.21 kB | Adobe PDF | View/Open |
10_chapter 1.pdf | | 143.31 kB | Adobe PDF | View/Open |
11_chapter 2.pdf | | 637.26 kB | Adobe PDF | View/Open |
12_chapter 3.pdf | | 939.31 kB | Adobe PDF | View/Open |
13_chapter 4.pdf | | 1.15 MB | Adobe PDF | View/Open |
14_chapter 5.pdf | | 951.61 kB | Adobe PDF | View/Open |
15_chapter 6.pdf | | 740.77 kB | Adobe PDF | View/Open |
16_conclusions.pdf | | 29.93 kB | Adobe PDF | View/Open |
17_bibliography.pdf | | 305.51 kB | Adobe PDF | View/Open |
80_recommendation.pdf | | 511.63 kB | Adobe PDF | View/Open |
Items in Shodhganga are licensed under the Creative Commons Attribution-NonCommercial-ShareAlike 4.0 International licence (CC BY-NC-SA 4.0).