Listed in Table 1. We will describe these evaluation metrics in detail.

Figure 5. BiLSTM framework.

Table 1. Details of evaluation metrics. "Auto" and "Human" represent automatic and human evaluation, respectively. "Higher" and "Lower" mean that the higher/lower the metric, the better a model performs.

Metrics            Evaluation Process           Higher/Lower
Composite Score    Auto                         Higher
Success Rate       Auto                         Higher
Word Frequency     Auto                         Higher
Grammaticality     Auto (Error Rate)            Lower
Fluency            Auto (Perplexity)            Lower
Naturality         Human (Naturality Score)     Higher

(1) The attack success rate is defined as the percentage of samples incorrectly predicted by the target model out of the total number of samples. In this experiment, all of these samples are concatenated with the universal trigger. The formula is defined as follows:

$$ S = \frac{1}{N} \sum_{i=1}^{N} \mathbb{1}\big( f(t, x_i) \neq y_i \big), \qquad (6) $$

where N is the total number of samples, f represents the target model, t represents the universal trigger, x_i represents the i-th test sample, and y_i represents the true label of x_i.

(2) We divide the quality of triggers into four parts: word frequency [29], grammaticality, fluency, and naturality [23]. The average frequency of the words in the trigger is calculated using empirical estimates from the training set of the target classifier. The higher the average frequency of a word, the more times the word appears in the training set. Grammaticality is measured by adding triggers of the same number of words to benign text and then using an online grammar checking tool (Grammarly) to obtain the grammatical error rate of the sentence. With the help of GPT-2 [14], we use language model perplexity (PPL) to measure fluency. Naturalness reflects whether an adversarial example is natural and indistinguishable from human-written text.

(3) We construct a composite score Q to comprehensively measure the overall performance of our attack method. The formula is defined as follows:

$$ Q = \alpha S + \beta W - \gamma G - \delta P, \qquad (7) $$

where S is the attack success rate of the trigger, W is the average word frequency of the trigger, G is the grammatical error rate of the trigger, and P is the perplexity given by GPT-2 [14]. W, G, and P are all normalized. α, β, γ, and δ are the coefficients of the four terms, and α + β + γ + δ = 1. In order to balance the weight of each parameter, we set α, β, γ, and δ to 0.25. The higher the Q score, the better the attack performance.

To further verify that our attack is more natural than the baseline, we conducted a human evaluation study. We provide 50 pairs of comparative texts. Each pair consists of one of our triggers and one baseline trigger (with or without benign text). Workers are asked to choose the more natural one, and are allowed to choose an "uncertain" option. For each instance, we collected five different human judgments and calculated the average score.
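For the fluency metric, a minimal sketch of how a GPT-2 perplexity score could be computed with the Hugging Face transformers library follows. The library choice and the single-pass (no sliding-window) evaluation are assumptions for illustration; the paper does not describe its implementation.

```python
import torch
from transformers import GPT2LMHeadModel, GPT2TokenizerFast

def gpt2_perplexity(text: str, model_name: str = "gpt2") -> float:
    """Perplexity of `text` under GPT-2: exp of the mean token-level
    negative log-likelihood. Lower values indicate more fluent text."""
    tokenizer = GPT2TokenizerFast.from_pretrained(model_name)
    model = GPT2LMHeadModel.from_pretrained(model_name)
    model.eval()
    input_ids = tokenizer(text, return_tensors="pt").input_ids
    with torch.no_grad():
        # Passing labels makes the model return the mean cross-entropy loss.
        loss = model(input_ids, labels=input_ids).loss
    return torch.exp(loss).item()
```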
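Equations (6) and (7) can likewise be made concrete. The sketch below computes the attack success rate S over trigger-prefixed samples and combines the terms into the composite score Q. The `classify` callable, the prepending of the trigger to the input text, and the expectation that W, G, and P are already normalized to [0, 1] are illustrative assumptions rather than details given in the paper.

```python
from typing import Callable, Sequence

def attack_success_rate(classify: Callable[[str], int],
                        trigger: str,
                        samples: Sequence[str],
                        labels: Sequence[int]) -> float:
    """Equation (6): S = (1/N) * sum_i 1(f(t, x_i) != y_i).
    A sample counts as an attack success when the trigger-prefixed
    input is misclassified by the target model."""
    wrong = sum(classify(f"{trigger} {x}") != y
                for x, y in zip(samples, labels))
    return wrong / len(samples)

def composite_score(S: float, W: float, G: float, P: float,
                    alpha: float = 0.25, beta: float = 0.25,
                    gamma: float = 0.25, delta: float = 0.25) -> float:
    """Equation (7): Q = alpha*S + beta*W - gamma*G - delta*P.
    W (word frequency), G (grammar error rate), and P (perplexity)
    are assumed normalized to [0, 1]; the four coefficients sum to 1
    and are all set to 0.25 in the paper."""
    assert abs(alpha + beta + gamma + delta - 1.0) < 1e-9
    return alpha * S + beta * W - gamma * G - delta * P
```

Under this equal weighting, a higher Q simply ranks one trigger above another when it trades off success rate, word frequency, grammaticality, and fluency more favorably.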
4.4. Attack Results

Table 2 shows the results of our attack and the baseline [28]. We observe that our attack achieves the highest composite score Q on both datasets, proving the superiority of our model over the baseline. For both positive and negative instances, our method has a higher attack success rate. It can be seen that the success rate of triggers on both the SST-2 and IMDB data reaches more than 50%. Moreover, our method achieved the best attack effect on the Bi-LSTM model trained on the SST-2 data set, with a success rate of 80.1%. Comparing the models trained on the two data sets, the following conclusion can be drawn: the Bi-LSTM model trained on the SST-2 data set is the most vulnerable to our attack.