GENERATIVE LANGUAGE MODELS IN SELF-EDUCATION PRACTICES: A SYSTEMATIC LITERATURE REVIEW
Abstract
This article presents a systematic review examining how generative artificial intelligence (in particular, large language models) affects self-education practices. The growing adoption of such tools for explaining study material, providing feedback, orienting learners in unfamiliar topics, and solving learning tasks makes this question increasingly relevant. The review synthesizes evidence on how generative models relate to self-regulation in learning, cognitive offloading, epistemic risks, and human-capital formation. The methodology follows the PRISMA 2020 guidelines. Searches in Scopus and Web of Science, supplemented by automated discovery tools, identified 1,200 records; after screening and full-text assessment, 97 studies were included. The findings indicate that generative models are most consistently associated with faster initial orientation in new topics, easier access to explanations and feedback, and reduced short-term cognitive costs. At the same time, significant risks were identified: obtaining a high-quality answer more quickly does not always mean the learner has genuinely mastered the material. Users frequently place excessive trust in model outputs, particularly when responses are phrased confidently and coherently. Moreover, the benefits are unevenly distributed: some learners gain genuine support, while others may develop only an illusion of understanding. The most reliable conclusions concern short-term effects, whereas the long-term impact on human-capital formation remains insufficiently studied.