Open Access Paper
AI technology used as a tool for enhancing university students' English speaking skills: perceptions and practices
11 September 2023
Wei Wang, Bin Zou, Shuangshuang Xue
Proceedings Volume 12779, Seventh International Conference on Mechatronics and Intelligent Robotics (ICMIR 2023); 1277917 (2023) https://doi.org/10.1117/12.2689728
Event: Seventh International Conference on Mechatronics and Intelligent Robotics (ICMIR 2023), 2023, Kunming, China
Abstract
In the Internet era, online technology is deeply involved in teaching and learning, and using mobile applications to learn English has become an important route for many non-native learners. Learners at different stages still have a strong demand for spoken English practice, yet university students lack methods and channels for practising oral English, and offline oral English teaching and testing still have many limitations. Using artificial intelligence (AI) to assist oral English testing, especially speech recognition technology, is a new direction for oral English learning. EAP TALK is an AI oral English evaluation system based on statistical computation, big data, language cognition and deep learning, with automatic real-time scoring. This paper used the "Questionnaire Star" platform to design an electronic questionnaire, analyzed the responses with SPSS (V26.0), and conducted semi-structured interviews with students. The results show that participants are confident about using EAP TALK for oral practice and that EAP TALK has some advantages over classroom teaching.

1. INTRODUCTION

In this digital world dominated by the Internet, teaching and learning have become a real challenge for everyone involved in the educational process. The widespread emergence of technology and the Internet has promoted the development of education and provided the conditions for a better teaching and learning environment [1]. The rational use of technology can make the whole English learning process highly efficient and convenient and thereby raise the level of English learning. Spoken English is not only an important part of language competence but also a bridge for communication between people. To meet the requirements of the times and achieve the goal of cultivating comprehensive English ability, oral English learning places higher demands on learners at different stages, and the platforms and channels through which students acquire oral skills have diversified. By using an artificial intelligence speech recognition system, students can practise against the system's model reading along evaluation dimensions such as fluency, pronunciation and completeness, and improve the efficiency of their oral English learning. In addition, such a system can help students perceive the appeal of spoken English and increase their interest and ability in speaking. China intends to invest heavily in the development of artificial intelligence tools for educational purposes in order to play a leading role in this field [2]. In language classroom teaching, artificial intelligence-assisted oral English testing is a new development in college English teaching reform. With the gradual deepening of this reform, colleges and universities have paid more and more attention to oral English teaching: many have introduced multimedia audio-visual teaching software to simulate target contexts and increase students' opportunities for oral practice, or have established web-based foreign language learning platforms to supplement and strengthen oral English teaching. EAP Talk is an artificial intelligence oral English evaluation system based on statistical computation, big data, language cognition and deep learning, with automatic real-time scoring, developed by Bin Zou, a professor at Xi'an Jiaotong-Liverpool University. Against this background, and taking the use of EAP Talk as an example, this paper surveys the experiences and attitudes of freshmen and sophomores at one university in mainland China and explores the theory and methods of artificial intelligence speech-recognition-assisted oral testing, so that AI oral testing can interact with learners scientifically, improve their oral English ability, and promote the active construction of students' language knowledge structures to strengthen their confidence in oral English learning [3].

2. LITERATURE REVIEW

2.1 The effect of artificial intelligence in oral English learning

According to existing research, the content and knowledge involved in current oral English teaching are so complex and extensive that it is difficult for students to identify their own strengths and weaknesses in spoken English; the cause of this phenomenon is that students cannot obtain effective feedback in their daily oral English learning [4]. Students need software that helps them learn the language and plays a "scaffolding" role, helping them develop their strengths and improve their spoken English [5]. A sound oral English teaching system should have a reasonably complete hierarchy: it should divide users into different levels, provide practice materials matched to those levels, and give timely feedback according to preset parameters after each exercise, so as to improve users' pronunciation quality. It is also very important to integrate the pronunciation patterns of different mother tongues into oral English teaching and assessment; the purpose is to prevent students from becoming alienated from their mother tongue and to reorient current research on oral English teaching [6]. To this end, researchers working on artificial intelligence speech recognition have been looking for breakthroughs in recognizing non-native English accents and improving computer programs, so as to ensure that users from different countries can be accurately recognized and given correct feedback [7].

2.2 Research trends in automatic oral English assessment based on deep learning technology in China and abroad

To study how to integrate spoken English into syllabus design, Zou, Li and Li selected 84 students with CEFR levels between B1 and B2 from four classes of economics and marketing majors at a university in China to take an EAP course [8]. Most of the participants were active in class, teacher-student interaction was frequent, the teaching effect was good, and the study yielded valuable research material. The survey results show no obvious resistance to using an app for oral practice, and many participants found this new way of learning refreshing. Some participants reported discomfort from staring at an electronic screen for a long time, indicating that the application's page design still needs improvement. When the wireless local area network fluctuates heavily, the application tends to go offline and reconnect and cannot run stably and smoothly [9]. This problem arises not only from the high maintenance cost of the underlying digital technology, but also from more complex restrictions that often limit users' access to the software for geographical reasons. According to statistics, more and more Chinese college students choose to use mobile phones in class to improve their English proficiency and independent learning ability.

3. METHODOLOGY

3.1 Research questions

Inspired by existing research on the AI speech evaluation program EAP Talk, this study focuses on the program's effectiveness in improving oral English skills and its influence on how spoken English develops.

  • (1) In what ways can the AI speech evaluation program EAP Talk help EFL learners develop speaking skills online?

  • (2) What problems exist in the AI speech evaluation system when EAP Talk is used for practising speaking skills online?

3.2 Research subject and tools

3.2.1 Speech recognition and machine learning

As an extension of speech technology, speech evaluation is a downstream task of speech recognition: it automatically evaluates pronunciation level through intelligent speech technology, and it requires local explanation, that is, comparison of pronunciation errors, localization of defects, and analysis for problem solving.

In 2016, Ribeiro of the University of Washington gave a definition of black-box, model-agnostic interpretability in machine learning, and further discussion of interpretability has since become more and more important in the field. In the paper ""Why should I trust you?": Explaining the predictions of any classifier", Ribeiro et al. [10] proposed a local interpretation framework called LIME (Local Interpretable Model-agnostic Explanations). The algorithm uses surrogate models to make local predictions; these surrogate models do not necessarily have better accuracy, but they have better transparency. The more transparent the predictions of a classifier or regressor, the more readily humans can understand, in their own terms, the explanation given by the model. For a black-box model, as long as we input a data instance, the model outputs a prediction.

LIME first selects a model class with good interpretability, such as Lasso, a decision tree or a linear regression model, and then locally perturbs (or permutes) the data point of the sample instance of interest to obtain a new data set composed of perturbed instances. Finally, an interpretable model is trained on this new data set, so LIME yields a local surrogate model. Suppose x is the sample instance of interest, G is the set of candidate transparent models, g is a specific transparent model, L is the chosen loss function, π_x is the proximity measure that defines the neighbourhood around x, and Ω(g) is a complexity (sparsity) penalty. For a given black-box model f, Ribeiro et al. define the explanation of a single sample instance x as

ξ(x) = argmin_{g ∈ G} [ L(f, g, π_x) + Ω(g) ].

As a function of the transparent model g, the number of features involved should be as small as possible, which is what the sparsity measure Ω(g) enforces, while the proximity measure π_x determines how the perturbed data set is weighted.

From traditional neural networks to today's deep learning technology, what is the key distinction? Literally, it is the word "structure", since traditional neural networks are also called shallow learning techniques; but more and more people now regard deep learning as, in essence, a new way of programming. In 2015, the DeepMind team published a research paper entitled "Neural Programmer-Interpreters" (NPI), which appeared at ICLR'16 and was one of the best papers of that year. As a recursive, compositional neural network, the core module of NPI is an LSTM-based sequence model. NPI is a program-memory neural Turing machine with three learnable components: a task-agnostic recurrent core, a persistent key-value program memory for learnable programs, and domain-specific encoders that enable a single NPI to provide completely different functions in multiple perceptual environments. The program is generated by the controller RNN one operation at a time, and each operation is built on the basis of the existing program to form an operation sequence. Because attention is introduced over a probability distribution, the program output is differentiable in probability, so NPI can learn to generate programs without an explicit program paradigm. When using the LIME software package, similar to the neural programmer-interpreter proposed by Scott Reed, the corresponding explainer should be constructed first when implementing the program.

explainer = lime.lime_tabular.LimeTabularExplainer(
    X_train.values, feature_names=feature_names,
    class_names=y_train.unique().tolist(),
    categorical_features=categorical_features,
    categorical_names=categorical_names, kernel_width=kernel_width)

The parameters used by this function are:

X_train: the training set (its values form the first argument).
feature_names: the list of all feature names.
class_names: the list of target (class) values.
categorical_features: the list of categorical columns in the data set.
categorical_names: the list of names of the categorical columns.
kernel_width: controls the locality of the induced model; the larger the kernel width, the more linear the induced model.

The new data set obtained from the perturbed instances is a form of data augmentation. Basic data augmentation techniques include a series of fixed transformations (such as horizontal flipping, padding and cropping). For the speech evaluation considered here, the key is recombination, that is, properly replacing some speech segments and testing their scores. Combined with the neural programmer-interpreter, the computer can learn programs, giving the machine a capacity for self-evolution. The locally interpretable surrogate model specified in the LIME package should then be constructed accordingly.
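To illustrate how such an explainer is queried once constructed, the following is a minimal, self-contained sketch. It uses a generic public data set and a scikit-learn classifier purely for illustration; it is not EAP Talk's actual scoring pipeline, and the variable names are placeholders.

# Minimal LIME sketch (illustration only, not EAP Talk's pipeline).
import lime.lime_tabular
from sklearn.datasets import load_iris
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

data = load_iris()
X_train, X_test, y_train, y_test = train_test_split(
    data.data, data.target, test_size=0.2, random_state=0)
model = RandomForestClassifier(random_state=0).fit(X_train, y_train)  # black-box f

explainer = lime.lime_tabular.LimeTabularExplainer(
    X_train, feature_names=data.feature_names,
    class_names=list(data.target_names), kernel_width=3)

# Explain one prediction: x is the sample instance of interest; num_features is
# the sparsity constraint on the local surrogate model g.
exp = explainer.explain_instance(X_test[0], model.predict_proba, num_features=3)
print(exp.as_list())  # feature weights of the fitted local surrogate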

3.2.2 Questionnaire

The participants are non-English-major freshmen and sophomores at one university. Their entrance scores are at the middle level of the whole school, so they are representative to a certain extent. Among them there are 65 male and 100 female students, aged 18 or 19; 130 students come from the city where the school is located, and the others come from elsewhere in the province. A total of 160 students enrolled in this study, but only 110 completed the whole experiment and produced analyzable data. Because a high dropout rate may not be random and may therefore prevent the final sample from objectively representing the initial sample and the target population [12], an independent-samples t-test comparing the students who dropped out with those who remained was run at the end of the experiment; no significant difference was found between the two groups. The purpose of this study is to establish the usefulness and limitations of EAP Talk through participants' evaluations after a period of regular use. A mixed-methods design combining a questionnaire survey with semi-structured interviews is adopted; the qualitative interview results help to analyze and explain the quantitative questionnaire results. The interview questions are designed around students' perception of their oral skills and of the software's limitations after using EAP Talk, and the qualitative data are collected and analyzed to explain the questionnaire results and to capture students' reflections, yielding information about the practicability of EAP Talk for improving oral English skills and the limitations of its use in practice. The questionnaire data were collected through the "Questionnaire Star" platform with a specially designed questionnaire intended to elicit answers relevant to the research objectives. The questionnaire contains 18 questions organized into the following three dimensions:

Dimension I: perception of the improvement of oral English by EAP TALK.

Dimension II: perception of the relationship between EAP TALK and offline teaching.

Dimension III: perception of the functions of EAP TALK software.

Participants' views were graded on a five-point Likert scale [11], with response options ranging from "Strongly Disagree" to "Strongly Agree".

After the questionnaire responses were collected, the data were statistically analyzed.

The five-point score of the scale is as follows:

Strongly disagree = 1.

Disagree = 2.

Neutral = 3.

Agree = 4.

Strongly agree = 5.
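The dropout check described above amounts to an independent-samples t-test comparing students who left the study with those who completed it. The following is a minimal sketch of such a test with placeholder score arrays; it illustrates the method and is not the study's actual data or SPSS output.

# Independent-samples t-test for the dropout check (placeholder data only).
import numpy as np
from scipy import stats

scores_dropped = np.array([52, 48, 55, 50, 47, 53])    # placeholder values
scores_remaining = np.array([53, 49, 56, 51, 54, 50])  # placeholder values

t_stat, p_value = stats.ttest_ind(scores_dropped, scores_remaining)
print(f"t = {t_stat:.3f}, p = {p_value:.3f}")
# A p-value above 0.05 indicates no significant difference between the groups,
# which is the outcome reported in this subsection.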

3.2.3 Interview

Subsequently, we randomly selected six students from the sample for semi-structured interviews, in order to probe the questionnaire results and to understand students' views on the limitations and future development of the software after using EAP Talk.

The details of the interview are as follows:

  • A. Students’ understanding of the functions of EAP Talk software.

  • B. Frequency of using EAP Talk software.

  • C. Students' opinions on the development potential and advantages of EAP Talk software in the future market.

  • D. Students' views on the limitations of EAP Talk software.

The contents of the interviews were transcribed, and all participants' answers were initially coded in the same order as the interview questions and then analyzed. Interviewees are numbered S1, S2, and so on, and gender is marked F/M (female/male).

3.3 Validity and reliability

To ensure that the structure of the questionnaire and the questions asked serve the research purpose, we tested its reliability and validity.

For reliability, SPSS (Version 26.0) was used to calculate Cronbach's alpha as a measure of internal consistency.

As shown in Table 1, the Cronbach's alpha values of the three dimensions are 0.863, 0.756 and 0.832, respectively, which reflects good reliability of the questionnaire items and scores. Table 2 shows that the KMO value is 0.898 > 0.8, indicating high validity.

Table 1. The reliability of each dimension.

Dimension | Cronbach's Alpha | No. of Items
I | 0.863 | 6
II | 0.756 | 4
III | 0.832 | 4

Table 2. The validity of the test (KMO and Bartlett's test).

Kaiser-Meyer-Olkin Measure of Sampling Adequacy | 0.898
Bartlett's Test of Sphericity | Approx. Chi-Square | 829.256
 | df | 91
 | Sig. | 0.000
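For reference, Cronbach's alpha as reported in Table 1 can also be computed directly from the item responses. The following is a minimal sketch with placeholder data; the DataFrame `items` stands for the Likert items of a single dimension and is not the study's data set.

# Cronbach's alpha for one questionnaire dimension (placeholder data only).
import pandas as pd

def cronbach_alpha(items: pd.DataFrame) -> float:
    k = items.shape[1]                          # number of items in the dimension
    item_vars = items.var(axis=0, ddof=1)       # variance of each item
    total_var = items.sum(axis=1).var(ddof=1)   # variance of the summed scale
    return (k / (k - 1)) * (1 - item_vars.sum() / total_var)

# Example with placeholder responses (5 respondents, 4 items, coded 1-5):
items = pd.DataFrame({"q11": [4, 3, 4, 5, 3], "q12": [4, 4, 3, 5, 3],
                      "q13": [4, 3, 4, 5, 4], "q14": [4, 3, 4, 4, 3]})
print(round(cronbach_alpha(items), 3))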

4. RESULTS

4.1 Participants' background information

The demographic background information of the participants, collected in the first part of the questionnaire, is shown in Table 3:

Table 3. Demographics of participants.

 |  | Frequency | %
Gender | Male | 27 | 24.5
 | Female | 83 | 75.5
Age | 17 | 1 | 0.9
 | 18 | 34 | 30.9
 | 19 | 26 | 23.6
 | 20 | 24 | 21.8
 | 21 | 14 | 12.7
 | 22 | 6 | 5.5
 | 23 | 3 | 2.7
 | 28 | 1 | 0.9
 | 29 | 1 | 0.9
How long (days) have you been using EAP TALK to practice your spoken English? | 1 | 1 | 0.9
 | 2 | 1 | 0.9
 | 3 | 1 | 0.9
 | 7 | 4 | 3.6
 | 10 | 10 | 9.1
 | 14 | 13 | 11.8
 | 15 | 20 | 18.2
 | 19 | 1 | 0.9
 | 20 | 12 | 10.9
 | 21 | 5 | 4.5
 | 22 | 1 | 0.9
 | 25 | 2 | 1.8
 | 30 | 38 | 34.5
 | 40 | 1 | 0.9
How often, if ever, do you use EAP TALK to practice EAP speaking skills? | Everyday | 53 | 48.2
 | More than twice a week | 44 | 40.0
 | Once a week | 13 | 11.8
Total |  | 110 | 100.0

According to the data in Table 3, there are 27 male and 83 female participants; most participants are female (75.45%), which reflects the overall male-to-female ratio at this medical university. Most participants were between 18 and 22 years old, and more than half had used EAP TALK for two weeks or even a month (38 participants for 30 days and 20 for 15 days). The data also show that nearly half of the participants used EAP TALK every day (48.18%), 40% used it more than twice a week, and only 13 (11.82%) used it once a week. Descriptive statistical analysis of the questionnaire data was carried out with SPSS V26.0; the results are shown in Table 4:

Table 4. The results of descriptive statistical analysis of the questionnaire data.

Question | Valid | Missing | Mean | Std. Deviation | Variance | Minimum | Maximum
1. Do you think EAP TALK is a convenient tool for improving EAP speaking skills? | 110 | 0 | 3.95 | 0.588 | 0.346 | 2 | 5
2. Using EAP TALK can improve my overall ability of academic spoken English. | 110 | 0 | 3.90 | 0.649 | 0.421 | 2 | 5
3. Using EAP TALK can improve my fluency in academic spoken English. | 110 | 0 | 3.95 | 0.539 | 0.291 | 2 | 5
4. Using EAP TALK can improve my pronunciation of academic spoken English. | 110 | 0 | 3.81 | 0.710 | 0.505 | 2 | 5
5. Using EAP TALK can improve my oral English of CET-4. | 110 | 0 | 3.94 | 0.595 | 0.354 | 2 | 5
6. Using EAP TALK can improve my CET-4 listening ability. | 110 | 0 | 3.81 | 0.613 | 0.376 | 2 | 5
7. Do you think EAP TALK cannot replace face-to-face teaching? | 110 | 0 | 3.61 | 0.755 | 0.571 | 2 | 5
8. Compared with classroom teaching, do you think EAP TALK is more helpful to improve oral English? | 110 | 0 | 3.80 | 0.701 | 0.492 | 2 | 5
9. Do you think EAP TALK can make up for the lack of classroom teaching? | 110 | 0 | 4.07 | 0.554 | 0.307 | 2 | 5
10. Does using EAP TALK help to learn college English? | 110 | 0 | 4.05 | 0.522 | 0.272 | 3 | 5
11. Do you think EAP TALK is easy to use? | 110 | 0 | 3.46 | 0.725 | 0.526 | 2 | 5
12. Do you think the scoring of EAP TALK can reflect the real level of your oral ability? | 110 | 0 | 3.61 | 0.692 | 0.479 | 2 | 5
13. Are you satisfied with the voice recognition ability of EAP TALK? | 110 | 0 | 3.89 | 0.626 | 0.392 | 2 | 5
14. Are you satisfied with the classification of EAP TALK? | 110 | 0 | 3.60 | 0.680 | 0.462 | 1 | 5
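The Mean, Std. Deviation and Variance columns in Table 4 are ordinary descriptive statistics; outside SPSS they could be reproduced with, for example, pandas. The sketch below uses placeholder responses, not the study's data.

# Descriptive statistics per Likert item, analogous to Table 4 (placeholder data).
import pandas as pd

responses = pd.DataFrame({"q1": [4, 4, 3, 5, 4], "q2": [4, 3, 4, 4, 5]})

summary = pd.DataFrame({
    "Mean": responses.mean(),
    "Std. Deviation": responses.std(ddof=1),
    "Variance": responses.var(ddof=1),
    "Minimum": responses.min(),
    "Maximum": responses.max(),
})
print(summary.round(3))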

Using 3 as the midpoint of the mean scale, most items in Table 4 score above the midpoint, which means that participants agree with most of the statements surveyed. The mean values for Dimension I (questions 1-6) indicate that EAP TALK is seen as a convenient tool that helps improve the overall ability, fluency and pronunciation of academic spoken English, and the means of questions 5 and 6 show that EAP TALK is considered helpful for improving CET-4 speaking and listening. The means for Dimension II (questions 7-10) show that although most participants believe EAP TALK improves oral English more than classroom teaching does and can make up for the deficiencies of classroom teaching, most students nevertheless agree that practising academic spoken English with EAP TALK cannot replace face-to-face teaching, as shown in Table 5. Figure 1 shows that the mean for this attitude item was M = 3.61 (SD = 0.755, N = 110); 52.7% of participants believed EAP TALK cannot replace face-to-face teaching, while 30.9% remained neutral.

Figure 1. Attitudes and knowledge: EAP Talk for speaking skills.

Table 5. The statistical results of question 7.

7. Do you think EAP TALK cannot replace face-to-face teaching?
Response | Frequency | Percent | Valid Percent | Cumulative Percent
Disagree | 9 | 8.2 | 8.2 | 8.2
Neutral | 34 | 30.9 | 30.9 | 39.1
Agree | 58 | 52.7 | 52.7 | 91.8
Strongly Agree | 9 | 8.2 | 8.2 | 100.0
Total | 110 | 100.0 | 100.0 |

Dimension III (questions 11-14) focuses on participants' satisfaction with the experience of using EAP TALK (Table 6). According to Table 6, more than half of the participants agree that EAP TALK is easy to use and that its scoring reflects their true level of spoken English. On the questions about satisfaction with EAP TALK's speech recognition capability and level classification, more than half of the respondents chose "Agree" (65.5% and 59.1%, respectively), indicating that most participants are satisfied with these two functions.

Table 6. The participants' satisfaction with the experience of using EAP TALK.

Question | Response | Frequency | Percent | Valid Percent | Cumulative Percent
11. Do you think EAP TALK is easy to use? | Disagree | 11 | 10.0 | 10.0 | 10.0
 | Neutral | 41 | 37.3 | 37.3 | 47.3
 | Agree | 54 | 49.1 | 49.1 | 96.4
 | Strongly Agree | 4 | 3.6 | 3.6 | 100.0
12. Do you think the scoring of EAP TALK can reflect the real level of your oral ability? | Disagree | 6 | 5.5 | 5.5 | 5.5
 | Neutral | 38 | 34.5 | 34.5 | 40.0
 | Agree | 59 | 53.6 | 53.6 | 93.6
 | Strongly Agree | 7 | 6.4 | 6.4 | 100.0
13. Are you satisfied with the voice recognition ability of EAP TALK? | Disagree | 2 | 1.8 | 1.8 | 1.8
 | Neutral | 22 | 20.0 | 20.0 | 21.8
 | Agree | 72 | 65.5 | 65.5 | 87.3
 | Strongly Agree | 14 | 12.7 | 12.7 | 100.0
14. Are you satisfied with the classification of EAP TALK? | Strongly Disagree | 1 | 0.9 | 0.9 | 0.9
 | Disagree | 5 | 4.5 | 4.5 | 5.5
 | Neutral | 35 | 31.8 | 31.8 | 37.3
 | Agree | 65 | 59.1 | 59.1 | 96.4
 | Strongly Agree | 4 | 3.6 | 3.6 | 100.0
Total |  | 110 | 100.0 | 100.0 |

4.2 Part III data: statistics on the total scores of participants

Each question in the questionnaire is scored on the five-point Likert scale, so the maximum score per question is 5 and the maximum total score across the 14 scaled questions is 70.

The total score of each participant was calculated with SPSS V26.0, as shown in Table 7:

Table 7. The total score of each participant.

Total score | Frequency | Percent | Valid Percent | Cumulative Percent
35.00 | 1 | 0.9 | 0.9 | 0.9
40.00 | 1 | 0.9 | 0.9 | 1.8
41.00 | 1 | 0.9 | 0.9 | 2.7
42.00 | 4 | 3.6 | 3.6 | 6.4
43.00 | 2 | 1.8 | 1.8 | 8.2
44.00 | 3 | 2.7 | 2.7 | 10.9
45.00 | 1 | 0.9 | 0.9 | 11.8
46.00 | 2 | 1.8 | 1.8 | 13.6
47.00 | 1 | 0.9 | 0.9 | 14.5
48.00 | 5 | 4.5 | 4.5 | 19.1
49.00 | 4 | 3.6 | 3.6 | 22.7
50.00 | 3 | 2.7 | 2.7 | 25.5
51.00 | 8 | 7.3 | 7.3 | 32.7
52.00 | 5 | 4.5 | 4.5 | 37.3
53.00 | 9 | 8.2 | 8.2 | 45.5
54.00 | 10 | 9.1 | 9.1 | 54.5
55.00 | 12 | 10.9 | 10.9 | 65.5
56.00 | 14 | 12.7 | 12.7 | 78.2
57.00 | 6 | 5.5 | 5.5 | 83.6
59.00 | 3 | 2.7 | 2.7 | 86.4
60.00 | 2 | 1.8 | 1.8 | 88.2
61.00 | 3 | 2.7 | 2.7 | 90.9
62.00 | 2 | 1.8 | 1.8 | 92.7
63.00 | 1 | 0.9 | 0.9 | 93.6
64.00 | 1 | 0.9 | 0.9 | 94.5
65.00 | 1 | 0.9 | 0.9 | 95.5
66.00 | 1 | 0.9 | 0.9 | 96.4
67.00 | 2 | 1.8 | 1.8 | 98.2
70.00 | 2 | 1.8 | 1.8 | 100.0

The maximum total score across all questions is 70, and the passing score is 42. As shown in Table 7, only three participants scored below 42, and most participants scored high on the questionnaire, which shows that most participants strongly endorse the questionnaire items and are confident that EAP TALK can improve their oral English ability.
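The total scores in Table 7 are simply the sum of each participant's 14 Likert-coded answers. The sketch below illustrates this computation and the check against the passing score of 42, again with placeholder data rather than the study's responses.

# Per-participant total score (14 items, maximum 14 x 5 = 70) and pass check.
import pandas as pd

responses = pd.DataFrame({f"q{i}": [4, 3, 5, 2] for i in range(1, 15)})  # placeholder

totals = responses.sum(axis=1)              # one total score per respondent
print(totals.value_counts().sort_index())   # frequency distribution, cf. Table 7
print((totals < 42).sum(), "participant(s) below the passing score of 42")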

5. DISCUSSION

In this study, a questionnaire was used to observe participants' recognition of EAP TALK through three dimensions: "perception of the improvement of oral English by EAP TALK", "perception of the relationship between EAP TALK and offline teaching" and "perception of the functions of the EAP TALK software". First, the reliability and validity of the questionnaire were analyzed with SPSS V26.0. Analysis of the collected data with SPSS V26.0 shows that most participants have high total scores, that is, a high degree of agreement with the questionnaire items. Most participants hold positive views of practising academic spoken English with the EAP TALK software, finding it easy to use and effective for improving spoken English. However, regarding the relationship between EAP TALK and offline teaching, most participants believe that EAP TALK cannot replace face-to-face teaching, although it can make up for the shortcomings of classroom teaching and, compared with classroom teaching, is more helpful for improving oral English. EAP Talk is an artificial intelligence oral English evaluation system based on statistical computation, big data, language cognition and deep learning, with automatic real-time scoring. Previous studies focused on macro-level topics such as students' and teachers' perception of artificial intelligence-assisted teaching. This study starts with an easy-to-use academic oral English testing app, EAP TALK, specifically discusses participants' perception of the app's functions and its ability to support teaching, and explores participants' overall views on online oral English teaching through their experience of using the app. The semi-structured interviews also reflect a positive attitude towards EAP TALK for improving oral English. Although most participants scored high, the interviews also reveal that EAP TALK still has shortcomings, such as inaccurate scoring that affects the efficiency of use and insufficiently precise recognition of word pronunciation, indicating that there is still room for improvement in the software.

6. CONCLUSION

This study explores the practicability of students improving their oral English skills by using the EAP TALK software. The survey results show that students generally do not believe the EAP TALK application can replace face-to-face teaching, especially in view of its currently limited ability to correctly recognize the pronunciation of non-native English speakers, since poor speech recognition undermines accurate performance assessment [13]. The results also show that, on the whole, participants' evaluation of the EAP TALK software for oral development is positive, despite some limitations. The current research initially shows that the EAP TALK software has a positive impact on helping users improve their oral English skills, which suggests that developers should make improvements according to the issues raised by the research subjects as they continue to develop the software. The current study is regarded as a preliminary step towards further study of AI-supported EAP oral-skills development strategies. It is hoped that this study will promote interdisciplinary research on EAP and artificial intelligence technology and contribute to language-related academic research at the local and international levels [14]. In addition, collecting students' views on specific language learning skills may inform teacher training that combines a focused approach with EAP teaching- and learning-related technologies.

ACKNOWLEDGMENTS

Fund programme: First-class Undergraduate Courses of Shaanxi University of Chinese Medicine, College English One (111-172040321019); the eleventh batch of the China Foreign Language Education Fund (ZGWYJYJJ11A014).

REFERENCES

[1] Celce-Murcia, M., Brinton, D. M., Goodwin, J. M., Griner, B., "Teaching pronunciation: A course book and reference guide," 2nd ed., Cambridge University Press, Cambridge (2010).

[2] Dlaska, A., Krekeler, C., "Self-assessment of pronunciation," System, 36(4), 506-516 (2008). https://doi.org/10.1016/j.system.2008.03.003

[3] Dörnyei, Z., "Research methods in applied linguistics: Quantitative, qualitative, and mixed methodologies," Oxford Applied Linguistics, Oxford (2007).

[4] Knight, W., "China's AI awakening: The West should stop worrying about China's AI revolution," MIT Technology Review (2017). https://www.technologyreview.com/s/609038/chinas-ai-awakening/

[5] Kudus, N., Razali, W., "Mobile phones and youth: Advocating media literacy and awareness through semiotic analysis," in Teaching and Learning Language: Current Trends and Practice, Universiti Sains Malaysia, Pulau Pinang, Malaysia (2011).

[6] Abugohar, M. A., Yunus, K., Ab Rashid, R., "Smartphone applications as a teaching technique for enhancing tertiary learners' speaking skills: Perceptions and practices," International Journal of Emerging Technologies in Learning (iJET), 14(9) (2019). https://doi.org/10.3991/ijet.v14i09.10375

[7] Murphy, J. M., "Intelligible, comprehensible, non-native models in ESL/EFL pronunciation teaching," (2014).

[8] McCrocklin, S. M., "Pronunciation learner autonomy: The potential of automatic speech recognition," System, 57, 25-42 (2016). https://doi.org/10.1016/j.system.2015.12.013

[9] Pallant, J., "SPSS survival manual: A step by step guide to data analysis using IBM SPSS," McGraw-Hill Education, Maidenhead, Berkshire (2013).

[10] Ribeiro, M. T., Singh, S., Guestrin, C., ""Why should I trust you?": Explaining the predictions of any classifier," in Knowledge Discovery and Data Mining (KDD), 1135-1144 (2016). https://arxiv.org/abs/1602.049

[11] Tri, D., Nguyen, N., "An exploratory study of ICT use in English language learning among EFL university students," Teaching English with Technology, 14(4), 32-46 (2014).

[12] Wei, W., Lun, M., Yong-An, L., Qianqian, Q., "An analysis of AI technology assisted English teaching based on the noticing hypothesis," in 2nd International Conference on Artificial Intelligence and Education (ICAIE), Dali, China, 158-162 (2021).

[13] Zou, B., Liviero, S., Hao, M., et al., "Artificial intelligence technology for EAP speaking skills: Student perceptions of opportunities and challenges," in Technology and the Psychology of Second Language Learners and Users, 433-463 (2020). https://doi.org/10.1007/978-3-030-34212-8

[14] Zou, B., Li, H., Li, J., "Exploring a curriculum app and a social communication app for EFL learning," Computer Assisted Language Learning, 31(7), 694-713 (2018). https://doi.org/10.1080/09588221.2018.1438474