Extractive Text Summarization Using Deep Learning for Tigrigna Language

Abstract: With the ever-increasing amounts of textual material such as web pages, news articles, blogs, microblogs


Background
Text summarization is a technique used to reduce an original text into a smaller one without losing its meaning, which eventually saves readers time [2]. With the ever-increasing amounts of textual data available in the digital space, such as news articles, blogs, and microblogs, and with less and less time to read them, there is a growing need for summarized text. Nowadays, the Internet has become a massive body of unstructured information [1]. Automatic text summarization assists us in acquiring relevant information in less time from the ever-growing unstructured data/text accessible on the web. For such problems, Natural Language Processing (NLP) plays an essential part in developing text summarization based on the nature of various specific languages.
Researchers have categorized Text Summarization (TS) techniques in various ways: based on the input type, as single-document or multi-document; based on the purpose, as general or domain-specific; and based on the output type, as extractive or abstractive text summarization [2].
Among these classifications, an extractive summarization method concatenates important sentences or paragraphs, without understanding their meaning, to produce a subset of the original text document. In the case of an abstractive summarization approach, the system must comprehend the meaning of the original document in order to generate a paraphrased document with different phrases or sentences but the same meaning as the original.
In this study, we use extractive text summarization for unsupervised single-document summarization of the Tigrigna language, using Deep Belief Networks (DBNs) composed of stacked layers of Restricted Boltzmann Machines (RBMs). The selected language is morphologically rich: the same word can have multiple meanings, which makes abstractive summarization difficult. Tigrigna is a Semitic language spoken in the Tigray Region of Northern Ethiopia and in Eritrea, according to Abraham Negash [3].

Problem Statement
With ever-increasing amounts of textual material, there is more and more data available in web pages, news articles, blogs, microblogs, and other sources of information, and readers have less and less time to find the important information they need; hence they need summarized text documents. The proposed solution produces a condensed summary, that is, a subset of the original documents.

Objectives

General Objectives
The main objective of the study was to build a text summarizer for Tigrigna news articles by identifying the most important information from a given text and presenting it to the end users using deep learning neural networks.

Specific Objectives
The specific objectives of the proposal are listed as follows:
1) To review and analyze automatic text summarization methods.
2) To design and develop an extractive text summarizer for Tigrigna news articles.
3) To produce a condensed summary of news articles.
4) To evaluate the performance of the Tigrigna text summarizer.
5) To report the findings of the study for upcoming research.

Literature Review
This literature review covers an overview of text summarization, the types of text summarization, the approaches to text summarization, and the evaluation methods of text summarization in detail.
Text summarization is the process of condensing large documents into smaller ones without losing the context, which eventually saves readers time [4]. Automatic text summarization is a growing field of study in NLP, at the intersection of machine learning and data mining, and it has become a popular research area in the last few years as data grow and there is a demand to process them more efficiently [5].
Generating a summary requires considerable cognitive effort from the summarizer (either a human being or an artificial system): different fragments of a text must be selected, reformulated, and assembled according to their relevance, and the coherence of the information included in the summary must also be taken into account [6]. Natural language processing (NLP) plays an important role in developing automatic text summarization based on the nature of different specific languages [4]. This can be done using different techniques, such as TextRank with a graph-based ranking algorithm, feature-based text summarization, LexRank using TF-IDF with a graph-based algorithm, topic-based methods, sentence embeddings, and deep learning techniques using word2vec and encoder-decoder models.
The first applications in the history of text summarization were library catalogs in 1674 and, later, generated abstracts for research articles in 1898 [7]. At first, the emphasis was on generating summaries that would help choose the best articles for deeper reading rather than summaries that would replace the original text.
The first summarization system was built on the first commercial computer, the IBM 701, by Luhn in the 1950s; it was based on a bag-of-words technique and word frequency counts. Luhn extracted frequently occurring words and then gave each sentence a score based on how many frequent words the sentence contained; the score represented the significance of the sentence, and the abstract was formed from the most significant sentences [8]. A decade later, Edmundson (1969) [9] introduced new statistical methods for automatic extraction based on the Cue, Key, Title, and Location methods. The Cue method builds a corpus of words whose appearance in a sentence makes the sentence important, unimportant, or irrelevant. The Key method selects words that appear in the original text more frequently than in the whole corpus, a precursor of the TF-IDF (term frequency-inverse document frequency) method. The Title method takes into account the title and the headings, and the Location method uses the position of the sentences: sentences under headings, and the first and last sentences of paragraphs and of the document, are usually more relevant than other sentences. Edmundson also emphasized that semantic and syntactic features of the text should be taken into account in the future development of summarizers; for example, the length of the summary could be determined automatically, whereas Edmundson set it to 25% of the sentences in the original.
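For illustration, the following is a minimal Python sketch of Luhn's frequency-based scoring idea (our own reconstruction, not Luhn's original program); the keyword cutoff and whitespace tokenization are simplifying assumptions:

```python
from collections import Counter

def luhn_style_summary(sentences, top_k=3, num_keywords=10):
    """Score sentences by how many of the document's most frequent
    words they contain, then return the top-k sentences (after Luhn, 1958)."""
    words = [w.lower() for s in sentences for w in s.split()]
    keywords = {w for w, _ in Counter(words).most_common(num_keywords)}
    scores = [sum(w.lower() in keywords for w in s.split()) for s in sentences]
    ranked = sorted(range(len(sentences)), key=lambda i: scores[i], reverse=True)
    return [sentences[i] for i in sorted(ranked[:top_k])]  # keep original order
```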
Little by little, linguistics was taken into account, and systems started to handle different word forms with NLP techniques. The focus was on extracting, categorizing, and classifying text. Between 1990 and 2000, machine learning was introduced in NLP to parse sentences into tokens and stem words into their base forms [7].
Up to this point, the research had focused solely on words, and computers were not able to understand the semantics of a text document. Text analytics nevertheless evolved rapidly in the next phase, with the intention of moving toward understanding the meaning of the text. Researchers are still trying to build systems that can genuinely understand semantics and pass reading comprehension tests.
Based on [10][11][12][13], RBMs have traditionally been used in computer vision tasks; however, these recent works have shown that they can be very effective in Natural Language Processing (NLP) tasks as well. The specific model they implemented was a regression process for sentence ranking, whose architecture consists of a convolution layer followed by a max-pooling layer, on top of a pretrained word2vec mapping. Leo Laugier et al. [10] implemented the proposed method and performed experiments on single-document extractive summarization, showing that an RBM can achieve performance superior to state-of-the-art systems. They used Python as a tool because of its versatility, its capability for fast production, and its great support from deep learning frameworks [10]. They also used the ROUGE-1 and ROUGE-2 evaluation metrics to compare results on the Document Understanding Conference (DUC) datasets from 2001 to 2004. ROUGE assesses the quality of an automatic summary by counting the overlapping units, such as n-grams, common word pairs, and longest common sub-sequences, between the automatic summary and a set of reference summaries [12].
The development of text summarization has been driven by the availability of more and more unstructured data and the need for efficient text analysis. Text summarization has evolved from statistical and machine-learning-based summarization to NLP-oriented methods such as deep learning approaches.
Our main target was to analyze how to produce extractive text summarization using Deep Belief Networks stacked from Restricted Boltzmann Machines, one of the deep learning algorithms, for local languages like Tigrigna. Additionally, we make our resources, such as the dataset we prepared, available to others as a baseline.

Related Works
The related works in this paper, covering the summarization technique, document size, summary type, and approach for the Tigrigna language, are summarized in the following table.

Research Methodology
In this section, the proposed research design and methodology, including the selection of the research type and approach, data preprocessing, and production of the summary, are explained in detail.

Research Design
In this paper, we used the Design Science research methodology to apply extractive text summarization to Tigrigna news articles. Design science research involves the construction and evaluation of Information Technology artifacts: constructs, models, methods, and instantiations [14]. Design science is a problem-solving paradigm that seeks to enhance human knowledge through the creation of innovative artifacts. DSR seeks to enhance the technology and science knowledge bases through the creation of innovative artifacts that solve problems and improve the environment in which they are instantiated [15]. The results of DSR incorporate both the newly designed artifacts and design knowledge (DK) that gives a fuller understanding, by means of design theories, of why the artifacts enhance the relevant application contexts [14,15]. In this case, the Design Science research method was used to design and construct extractive text summarization using the architecture shown below in Figure 1.
In this study, we followed all the stages of the design science process: problem identification and motivation, solution objectives, design and development, evaluation, and communication.

Data Set
In this paper, we prepared the dataset from the news articles available for the Tigrigna language from Voice of America (VOA) Tigrigna, Fana Broadcasting (FBC) Tigrigna, Dmtsi Woyane Tigray, and BBC Tigrigna news. For exploratory purposes, each of those articles was free from tables and figures in the source reports. The following table shows the details of our dataset.

Architectures
The architecture of the selected model shows all the stages of extractive text summarization. The primary goal of this system is to select the most frequent words and decide which sentences should be included in the summary. Figure 1 shows the architecture of our system, which contains the following phases.
(1) Preprocessing: This module consists of five components: text segmentation, tokenization, stop word elimination, stemming, and normalization. Their purpose is to efficiently represent the input text in a suitable format for the subsequent feature extraction process while maintaining the consistency of the summary.
(2) Feature extraction: After text preprocessing, nine sentence features are calculated based on their respective formulas to obtain the sentence score: the number of thematic words, sentence position, sentence length, sentence position relative to paragraph, number of proper nouns, number of numerals, number of named entities, Term Frequency-Inverse Sentence Frequency (TF-ISF), and sentence-to-centroid similarity. The scoring deals with each term's individual score as well as the sentences that include the term, assigning a score to words that occur in multiple sentences throughout the entire text.
(3) Feature enhancement: Following the extraction of those nine features, the features are enhanced using an RBM depending on their scores. We combine and convert the present features in the dataset into a smaller collection of features that can be used for summarization, clustering, classification, and other tasks. This is done to reduce overfitting and obtain better outcomes in less time. Each sentence comprises a nine-valued feature vector, and these vectors are used to construct the sentence-feature matrix. The feature vectors are then enhanced and abstracted, allowing complex features to be built out of simple ones. The sentence-feature matrix is fed into a Restricted Boltzmann Machine (RBM) with one hidden layer and one visible layer to improve those features. This step enhances the summary's quality.
(4) Summary generation: Using the selected deep learning model, this module is responsible for determining the best candidate sentences for the summary.

The Proposed Approach
This paper addresses unsupervised extractive single-document text summarization utilizing a deep learning approach, namely the Restricted Boltzmann Machine (RBM). It is applied in three phases: feature extraction, feature enhancement, and summary generation based on the scored values of those features [16][17][18]. These phases work together to integrate the main information combined in each phase and generate a summary.
Based on this, the sentences that contain the thematic words are scored using the sentence-feature matrix. The phases operate within the architecture already described in Figure 1: pre-processing represents the input text in a suitable format, feature extraction and enhancement build and refine the sentence-feature matrix, and summary generation chooses the best candidate sentences for the summary using the selected deep learning model.

Feature Extraction
Once ambiguity has been minimized, the text is built into a sentence-feature matrix: a feature vector is created for every sentence in the text, and these vectors make up the matrix. We experimented with several features. Sentence features such as the number of thematic words, sentence position, sentence length, sentence position relative to paragraph, number of proper nouns, number of numerals, number of named entities, and the Term Frequency-Inverse Sentence Frequency (TF-ISF) have proven to be the most effective for summarization in prior studies [17]. These calculations are carried out on the text obtained after the preprocessing phase.
I. Number of thematic words: The thematic words are taken from the top 10 most frequently occurring words of the sentences. For each sentence, the ratio of the number of thematic words to the total number of words is determined. Topics are themselves noun phrases, which we identify and extract based on part-of-speech patterns. We then score the relevance of these candidate topics through a process called lexical chaining, a low-level text analytics technique that links sentences via related nouns.
Once we have scored the lexical chains, themes that belong to the highest-scoring chains are assigned the highest relevancy scores. We can see that those themes work effectively in conveying notable information about the context of the article. Moreover, scoring these themes based on their contextual significance helps us see what is truly important; theme scores are especially helpful in comparing numerous articles across time to recognize patterns and trends. The significance of theme extraction and scoring is that it is limited to phrases that match certain part-of-speech patterns, scores them based on contextual pertinence and importance, and includes sentiment scores for themes.
II. Sentence position: This feature returns 1 if the sentence is among the first or last sentences of the text, and cos((SenPos - min) x ((1/max) - min)) otherwise, where SenPos is the position of the sentence in the text, min = th x N, max = th x 2 x N, N is the total number of sentences in the document, and th is a threshold calculated as 0.2 x N. In this way, we get a high feature value toward the beginning and end of the document, and a progressively decreasing value toward the middle.
III. Sentence length: This feature is used to avoid sentences that are too short, as those sentences will not convey much information. The feature is 0 if the number of words is less than 3, and the number of words in the sentence otherwise.
IV. Sentence position relative to paragraph: This comes straightforwardly from the observation that at the beginning of each paragraph a new discussion is started, and at the end of each paragraph we have a conclusive closing. The feature is 1 if the sentence is the first or last sentence of a paragraph, and 0 otherwise.
V. Number of proper nouns: the count of proper nouns in each sentence.
VI. Number of numerals: the count of numeric tokens in each sentence.
VII. Number of named entities: Entities can be names of individuals, associations, areas, times, amounts, financial values, rates, and more. Here, we count the total number of named entities in each sentence. Sentences having references to named entities like a company or a group of people are often quite important to make sense of a factual report.
VIII. Term Frequency-Inverse Sentence Frequency (TF-ISF): Since we are working with a single news article, we consider the TF-ISF feature rather than TF-IDF. The frequency of each word in a specific sentence is multiplied by the total number of occurrences of that word in all the other sentences; we calculate this product and sum it over all words.
IX. Sentence-to-centroid similarity: The sentence having the highest TF-ISF score is considered the centroid sentence. We then calculate the cosine similarity of each sentence with that centroid sentence.
Sentence Similarity = cosine_sim(sentence, centroid). (7)
At the end of this phase, we have a sentence-feature matrix.
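As an illustrative sketch only (our simplified reconstruction, not the exact system code), the following Python shows how a few of these features (thematic-word ratio, sentence position, sentence length, TF-ISF, and centroid similarity) could be computed from tokenized sentences; the 0-based positions and whitespace tokenization are simplifying assumptions, and the position feature follows the formula above literally:

```python
import math
from collections import Counter

def extract_features(sentences, th=0.2):
    """Build a partial sentence-feature matrix.
    `sentences` is a list of lists of word tokens."""
    N = len(sentences)
    all_words = [w for s in sentences for w in s]
    thematic = {w for w, _ in Counter(all_words).most_common(10)}

    # TF-ISF: for each word, (freq in this sentence) * (freq in other sentences)
    def tf_isf(i):
        counts = Counter(sentences[i])
        return sum(c * sum(s.count(w) for j, s in enumerate(sentences) if j != i)
                   for w, c in counts.items())

    tf_isf_scores = [tf_isf(i) for i in range(N)]
    centroid = sentences[tf_isf_scores.index(max(tf_isf_scores))]

    def cosine_sim(a, b):
        ca, cb = Counter(a), Counter(b)
        dot = sum(ca[w] * cb[w] for w in ca)
        na = math.sqrt(sum(v * v for v in ca.values()))
        nb = math.sqrt(sum(v * v for v in cb.values()))
        return dot / (na * nb) if na and nb else 0.0

    lo, hi = th * N, 2 * th * N  # the `min`/`max` of the position formula
    matrix = []
    for i, s in enumerate(sentences):
        # Feature II, exactly as stated in the text above.
        pos = 1.0 if i in (0, N - 1) else math.cos((i - lo) * ((1 / hi) - lo))
        matrix.append([
            sum(w in thematic for w in s) / max(len(s), 1),  # I. thematic ratio
            pos,                                             # II. position
            0 if len(s) < 3 else len(s),                     # III. length
            tf_isf_scores[i],                                # VIII. TF-ISF
            cosine_sim(s, centroid),                         # IX. centroid sim
        ])
    return matrix
```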

Feature Enhancement Using RBM
The sentence-feature matrix has been generated, with every sentence having nine feature-vector values (number of thematic words, sentence position, sentence length, sentence position relative to paragraph, number of proper nouns, number of numerals, number of named entities, Term Frequency-Inverse Sentence Frequency (TF-ISF), and sentence-to-centroid similarity). After this, recalculation is carried out on this matrix to enhance and abstract the feature vectors, constructing complicated features out of simple ones; this step improves the quality of the summary. To enhance and abstract them, the sentence-feature matrix is given as input to a Restricted Boltzmann Machine (RBM) with one hidden layer and one visible layer. A single hidden layer was enough for the learning process given the dimensions of the training data. The RBM we use has nine units in each layer with a learning rate of 0.1, and we use the Persistent Contrastive Divergence approach to sample throughout the learning process.
We trained the RBM for five epochs with a batch size of four, using four parallel Gibbs chains for sampling with the Persistent Contrastive Divergence (PCD) approach. Every sentence feature vector passes through the hidden layer, where the feature-vector values for each sentence are multiplied by learned weights and a learned bias value is added to all of the feature-vector values. At the end, we have a refined and improved matrix. Note that the RBM is trained afresh for every new document that needs to be summarized: the idea is that no document can be summarized without going over it, and since every document is distinct in the features extracted in section 3.5 above, the RBM is freshly trained for every new document.
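A minimal sketch of this enhancement step using scikit-learn's BernoulliRBM (which is trained with Persistent Contrastive Divergence); the min-max scaling to [0, 1] is our assumption, since that RBM expects inputs in this range:

```python
import numpy as np
from sklearn.neural_network import BernoulliRBM
from sklearn.preprocessing import MinMaxScaler

def enhance_features(sentence_feature_matrix):
    """Refine a (num_sentences x 9) feature matrix with a one-hidden-layer RBM.
    The RBM is trained afresh on each document, as described above."""
    X = MinMaxScaler().fit_transform(np.asarray(sentence_feature_matrix, dtype=float))
    rbm = BernoulliRBM(n_components=9,   # nine units in the hidden layer
                       learning_rate=0.1,
                       batch_size=4,
                       n_iter=5,         # five epochs
                       random_state=0)
    rbm.fit(X)              # scikit-learn trains BernoulliRBM with PCD
    return rbm.transform(X) # enhanced (abstracted) feature vectors
```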

Summary Generation
For summary generation with the RBM, the enhanced feature-vector values are summed to generate a score for each sentence. The sentences are then sorted in decreasing order of score, so the most relevant sentences appear first in the sorted list, and a scored subset of sentences is chosen to form the summary. The next sentence selected is the one having the highest similarity with the first sentence, chosen strictly from the top half of the sorted list. This process is repeated incrementally and recursively to choose more sentences until a user-specified summary limit is reached. Finally, the sentences are re-arranged in their order of appearance in the original document, which produces a coherent summary instead of a set of disconnected sentences.
Figure 6 shows the approach to creating the summary. The sum of all enhanced feature values for each sentence in the document is calculated and stored in a list; as a result, a value is generated for each sentence that represents its score. Sentences are ranked in decreasing order of score. The first sentence is always included in the summary because it is the most crucial one. Then, the top 50% of the remaining sentences are included and arranged in the summary in accordance with their original positions in the document.
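A minimal sketch of this scoring-and-selection procedure as just described (the variant that keeps the first sentence plus the top 50% of the rest):

```python
def generate_summary(sentences, enhanced_matrix, ratio=0.5):
    """Score sentences by summing enhanced feature values, keep the first
    sentence plus the top `ratio` of the rest, and restore document order."""
    scores = [sum(row) for row in enhanced_matrix]
    rest = sorted(range(1, len(sentences)), key=lambda i: scores[i], reverse=True)
    keep = {0, *rest[:int(len(rest) * ratio)]}
    return [sentences[i] for i in sorted(keep)]  # original order => coherence
```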

Evaluation Criteria
To evaluate the quality of the system-extracted summary against the human (manual) summary, the ROUGE (Recall-Oriented Understudy for Gisting Evaluation) metrics were used for the recall, precision, and F-score of the system summary. For the manual summaries, 10 news articles were prepared for testing and used as references against which summary quality was evaluated.
In this evaluation, the system-generated summary and the reference human summary were compared based on three basic measures: precision, recall, and F-measure [17]. In terms of precision, we were primarily interested in determining how much of the system summary was actually useful or required. Counting overlapping words, precision is defined by equation (1) as follows:

Precision = (number of overlapping words) / (total words in system summary). (1)
In the context of ROUGE, recall refers to how much of the reference summary is recovered or captured by the system summary. Considering the individual words overlapped, it can be calculated as follows:

Recall = (number of overlapping words) / (total words in reference summary). (2)

The F1-score combines recall and precision. The accuracy of a result would also account for true negative cases, but we focus on false positive and false negative cases instead; thus, the F1-score is formulated as:

F1 = 2 x (Precision x Recall) / (Precision + Recall). (3)

The focused features of the Tigrigna language are typically the stop words and punctuation marks used to analyze and filter the important words out of the given sentences/document. Instead of using a period (.) to divide sentences, Tigrigna uses the four-dot mark, fourPoint/Ariba'ete Netibi (።), for sentence segmentation; the additional punctuation marks used in the Tigrigna language are displayed in the following table. The typical stop words that mislead readers are eliminated from the given articles, and we sort them out according to the frequency of each term in the selected language.
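A minimal sketch of these three measures at the unigram (ROUGE-1) level, assuming whitespace-tokenized summaries:

```python
from collections import Counter

def rouge_1(system_tokens, reference_tokens):
    """Unigram overlap precision, recall, and F1 between two token lists."""
    sys_counts, ref_counts = Counter(system_tokens), Counter(reference_tokens)
    overlap = sum((sys_counts & ref_counts).values())  # clipped word overlap
    precision = overlap / max(sum(sys_counts.values()), 1)
    recall = overlap / max(sum(ref_counts.values()), 1)
    f1 = 2 * precision * recall / (precision + recall) if (precision + recall) else 0.0
    return precision, recall, f1
```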

Experiments
As discussed in the earlier sections, the approach we used consists of three stages, feature extraction, feature enhancement, and summary generation, which work together to extract the core information and generate a logical, consistent, and understandable summary.
A single Tigrigna news article was selected at random from the prepared articles as input. Text preprocessing was performed, such as splitting paragraphs into sentences using the 'arat netibi' (።) separator. After that, each sentence was tokenized into words by space delimiters and punctuation, and the Tigrigna stop words of the article were removed. We computed several features to improve the set of sentences selected for the summary and built the feature matrix based on sentence position, bi-token length, tri-token length, the TF-ISF feature and centroid calculation, cosine similarity, thematic number, sentence length, numeric tokens, and pronoun score, as discussed in section 3.2.1. The Restricted Boltzmann Machine (RBM) deep learning algorithm was used to improve the resulting accuracy without losing any important information from the extracted feature matrix. The final summary for the given sample input news article was generated as follows.
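A minimal sketch of this preprocessing step; the file names and stop-word list format are hypothetical, and the Ethiopic punctuation set used for tokenization is illustrative:

```python
import codecs
import re

# Hypothetical file names, for illustration only.
ARTICLE_FILE = 'article.txt'
STOPWORDS_FILE = 'tigrigna_stopwords.txt'

def preprocess(article_path=ARTICLE_FILE, stopwords_path=STOPWORDS_FILE):
    """Split a Tigrigna article into sentences on the four-dot mark (።),
    tokenize on whitespace and Ethiopic punctuation, and drop stop words."""
    text = codecs.open(article_path, encoding='utf-8').read()
    stopwords = set(codecs.open(stopwords_path, encoding='utf-8').read().split())
    sentences = [s.strip() for s in text.split('።') if s.strip()]
    tokenized = []
    for s in sentences:
        tokens = [t for t in re.split(r'[\s፣፤፥፦!?]+', s) if t]
        tokenized.append([t for t in tokens if t not in stopwords])
    return tokenized
```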
(The sample input news article and the reference/manual summary are shown in Figure 7.)

Summary Evaluation Measures
For experimentation and evaluation, a number of factual reports from the selected news articles, with varying numbers of sentences, were used. The proposed algorithm was applied to each of them, and the ROUGE evaluation metric was used to evaluate the precision, recall, and F-score of the system-generated summary against the reference/manual summary.
At this point, feature extraction and feature enhancement are carried out as explained in sections 3.2.1 and 3.2.2 for the given news article. For every sentence of the given document, the feature vector sum and the enhanced feature vector sum were used to generate the final summary, as displayed in Figure 8. The Restricted Boltzmann Machine (RBM) extracted a hierarchical representation of the data to enhance the features, which at first did not have much variety, and subsequently found the latent factors. The sentences were then ranked on the basis of the final feature vector sum, and summaries were created as proposed in section 3.6.
Thereafter, the final summary was evaluated using the ROUGE evaluation metrics. ROUGE stands for Recall-Oriented Understudy for Gisting Evaluation; it is essentially a set of metrics for evaluating automatic summarization that works by comparing an automatically produced system summary against a set of reference summaries (typically human-produced). The ROUGE evaluation for the sample summary above is displayed in Figure 10, and the final summary evaluations conducted for the different documents/articles are summarized in Table 4.

Results and Discussion
Most of the related works explained in Table 1 focus on one or two statistical or machine learning approaches, such as term frequency, Probabilistic Latent Semantic Analysis (PLSA), Topic-LSA, sentence position, and Sentence Rank (SR). In this paper, we used a combination of machine learning approaches for feature extraction, using the number of thematic words to find the most frequent words, sentence position, sentence length, sentence position relative to paragraph, number of proper nouns, number of numerals, number of named entities, and the Term Frequency-Inverse Sentence Frequency (TF-ISF), together with a deep learning algorithm, the Restricted Boltzmann Machine (RBM), for feature enhancement, to extract and generate the summarization.
The Restricted Boltzmann Machine (RBM) deep neural network architecture, rather than a purely statistical approach, uses a two-layered structure of visible and hidden layers to learn and enhance the extracted features, as explained in section 3.2.2. The RBM works with unsupervised, extractive single-document summarization and achieves satisfactory performance compared to the related works for the selected language. The results in Table 2 show the ROUGE-1, ROUGE-2, and ROUGE-L scores for the different news article test sets, with results averaged over the given articles.
The experiment shows that, in extracting the most relevant information from the source news articles, ROUGE-1 gives comparatively better average recall, precision, and F-score on the test-set summaries. The average ROUGE-1 results are 49% recall, 39% precision, and 42% F-score; ROUGE-2 shows 32% recall, 26% precision, and 28% F-score; and ROUGE-L shows 39% recall, 33% precision, and 35% F-score. The following figures display the ROUGE-1 scores.
To answer the research question 'Does the selected model generate a well-organized and coherent summary?', a variety of factual reports from different news articles with varying numbers of sentences were employed for testing and evaluation. The suggested model was applied to each of them, and the system-generated summaries were compared to the summaries created by humans.
The models for feature extraction and enhancement were applied as described in sections 3.2.1 and 3.2.2 for all the given documents, producing the feature vector sum and enhanced feature vector sum for each sentence. The Restricted Boltzmann Machine extracted a hierarchical representation out of data that initially did not have much variation, hence discovering the latent factors.
For the research question 'To what extent is the system summary efficient compared to the manual summary?', the coherence of the summary is reflected in the precision, that is, how much of the system summary was relevant or needed; this shows different results for different documents (30%, 51%, 32%, and 43% for Doc-1, Doc-2, Doc-3, and Doc-4, respectively). The harmonic mean of the system's ROUGE-1 precision and recall values was 42%, which shows that, on average, the extracted summary was coherent and well organized with respect to the reference summary. For the research question 'Does the extractive text summarization approach properly identify a salient and coherent summary in the original document?', the results displayed in Figure 12 show that the various documents have different scores. Among the given documents, news article 'Doc-2' has the highest ROUGE-1 scores for the overlap of the system summary with the reference summary: 45% recall, 52% precision, and 48% F-score. Here the precision has the higher score, which shows that the extractive text summarization properly identified a salient and coherent summary within the original document.

Conclusions
In this paper, the RBM model was used as an unsupervised learning algorithm to enhance the accuracy of the summary. It was noted that the suggested method produces concise summaries of the given single news article without irrelevant words. Features such as sentence-to-centroid similarity and thematic words, used in the feature extraction stage, improved the connectivity of the sentences, which also helps the proposed model produce concise and clear summaries. In this work, the proposed model's ROUGE evaluation scores show that an average of 49% recall, 39% precision, and 42% F-measure was obtained in ROUGE-1; 32% recall, 26% precision, and 28% F-measure in ROUGE-2; and 39% recall, 33% precision, and 35% F-measure in ROUGE-L.
The results produced using the proposed method give better evaluation parameters in comparison with the prevailing RBM method. This shows that the evaluation score of the system summary compared to the reference summary is highest in ROUGE-1, with an F-score (the harmonic mean of precision and recall) of 42%, and that the approach addresses the problem of information overload in the ever-increasing volume of news articles by generating extractive summarizations.

The Features Used for Tigrigna Language Text Summarization
The Tigrigna text summary feature set differs from other languages in reading and encoding Unicode files, reading and encoding stop words, punctuation marks, and morphological analysis. In this instance, the 'codecs' module was used, as in text = codecs.open(root.filename, encoding='utf-8').read(), to open the Tigrigna news article and decode it as Unicode.

Figure 7. Sample input news article for summary.

Figure 10. ROUGE scores of the extracted system summary over the reference summary.

Figure 11. ROUGE-1 summary evaluation corresponding to summaries of various documents.

Figure 12. Average evaluation scores of various documents.

Table 2. Statistics of the dataset.