2019
DOI: 10.1007/978-3-030-10674-4

Feature Selection and Enhanced Krill Herd Algorithm for Text Document Clustering


Cited by 387 publications (90 citation statements, all classified as mentioning); references 0 publications.
“…This opens up a whole new field of research where optimization of the learning process is required to enable a comprehensive capturing of the extracted features' embedded knowledge. One competent way to tackle such a problem is to use a meta-heuristic FS (feature selection) method [23][24][25][26][27][28][29][30][31][32][33], which intelligently selects only the relevant features without losing valuable information. This approach assumes that the reduced feature set carries significant information about the audio signal and is enough for the model to identify the different spoken languages while maintaining a high accuracy level.…”
Section: Motivation and Contributions (mentioning)
confidence: 99%
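The excerpt above describes wrapper-style metaheuristic feature selection, where candidate feature subsets are scored by how well a downstream model performs on them. Below is a minimal illustrative sketch, not the krill herd algorithm of the cited paper: a small population of binary feature masks is nudged toward the best mask found so far, with cross-validated k-NN accuracy as fitness. The dataset, population size, and update rates are assumptions made for the example.

```python
# A minimal, illustrative sketch of wrapper-style metaheuristic feature selection.
# It is NOT the krill herd algorithm of the cited paper: a simple population of
# binary feature masks is nudged toward the best mask found so far, and fitness
# is the cross-validated accuracy of a k-NN classifier on the selected features.
import numpy as np
from sklearn.datasets import load_digits
from sklearn.model_selection import cross_val_score
from sklearn.neighbors import KNeighborsClassifier

rng = np.random.default_rng(0)
X, y = load_digits(return_X_y=True)
n_features = X.shape[1]

def fitness(mask: np.ndarray) -> float:
    """Cross-validated accuracy on the selected feature subset (0 if empty)."""
    if not mask.any():
        return 0.0
    return cross_val_score(KNeighborsClassifier(), X[:, mask], y, cv=3).mean()

# Start from a small population of random binary feature masks.
population = rng.random((8, n_features)) < 0.5
best_mask, best_fit = None, -1.0

for generation in range(10):
    # Evaluate every candidate and track the best subset seen so far.
    for mask in population:
        f = fitness(mask)
        if f > best_fit:
            best_mask, best_fit = mask.copy(), f
    # Crude "movement" step: copy some bits from the best mask, then mutate.
    for i in range(len(population)):
        copy_bits = rng.random(n_features) < 0.3
        population[i] = np.where(copy_bits, best_mask, population[i])
        population[i] ^= rng.random(n_features) < 0.02

print(f"selected {best_mask.sum()} of {n_features} features, "
      f"cross-validated accuracy {best_fit:.3f}")
```

In the actual krill herd metaheuristic, the update step would combine neighbour-induced motion, foraging toward food, and random diffusion, typically with a transfer function mapping continuous positions to binary masks; the copy-and-flip update above only stands in for that behaviour.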
“…DLN is a way of penalizing the term weights for a document in accordance with its length. DLN has been one of the central topics of interest in IR and document clustering theory and applications for many years [2,22,23]. Common DLN schemes include cosine normalization, relative frequency, maximum term frequency, mean term frequency, probability normalization, byte length normalization, and likelihood of relevance.…”
Section: Literature Review (mentioning)
confidence: 99%
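Of the schemes listed in this excerpt, cosine normalization is the most common: each document's term-weight vector is divided by its Euclidean norm so that similarity scores are not dominated by document length. A minimal sketch follows, using toy documents invented purely for illustration.

```python
# A minimal sketch of cosine-style document length normalization (DLN): each
# document's term-frequency vector is divided by its Euclidean (L2) norm so that
# longer documents do not dominate similarity scores simply by repeating terms.
# The toy documents and vocabulary below are invented for illustration.
import math
from collections import Counter

docs = [
    "krill herd feature selection clustering",
    "feature selection feature selection feature selection clustering clustering",
]

# Shared vocabulary over both toy documents.
vocab = sorted({word for doc in docs for word in doc.split()})

def term_weights(doc: str) -> list[float]:
    """Raw term-frequency weights over the shared vocabulary."""
    counts = Counter(doc.split())
    return [float(counts[word]) for word in vocab]

def cosine_normalize(weights: list[float]) -> list[float]:
    """Divide by the L2 norm; an all-zero vector is returned unchanged."""
    norm = math.sqrt(sum(w * w for w in weights))
    return weights if norm == 0 else [w / norm for w in weights]

for doc in docs:
    raw = term_weights(doc)
    print(raw, "->", [round(w, 2) for w in cosine_normalize(raw)])
```

After normalization both toy vectors have unit length, so their cosine similarity reflects term distribution rather than raw document size; the other schemes in the excerpt (maximum term frequency, mean term frequency, byte length, and so on) differ mainly in the denominator used to penalize length.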
“…Table I demonstrates the WCAG 2.0 conformance levels. Other optimization techniques can be used [7][8][9][10][11][12][13]. All success criteria (SCs) of level A are satisfied.…”
Section: WCAG 2.0 (mentioning)
confidence: 99%