Vol. 3 No. 04 (2026): INTERNATIONAL JOURNAL OF SCIENCE AND TECHNOLOGY
Articles

ANALYSIS OF GENERATIVE MODELS IN AUTOMATED FORMATION OF CORPORATE LETTER TEMPLATES FOR EDUCATIONAL SYSTEMS

Axmadaliyev Mansurbek Erkaboy o‘g‘li
Tashkent University of Information Technologies named after Muhammad al-Khwarizmi, Department of Information Educational Technologies, Trainee Teacher
Halimov Og‘abek Ibodulla o‘g‘li
Master’s student at Tashkent University of Information Technologies named after Muhammad al-Khwarizmi

Published 23-03-2026

Keywords

  • generative models, corporate letter templates, educational systems, natural language processing, GPT-4, T5, BERT, BLOOM, automated document generation, prompt engineering, BLEU, ROUGE

How to Cite

ANALYSIS OF GENERATIVE MODELS IN AUTOMATED FORMATION OF CORPORATE LETTER TEMPLATES FOR EDUCATIONAL SYSTEMS. (2026). INTERNATIONAL JOURNAL OF SCIENCE AND TECHNOLOGY, 3(04), 44-53. https://doi.org/10.70728/tech.v3.i04.007

Abstract

The rapid development of artificial intelligence and natural language processing technologies has opened new opportunities for automating document workflows in educational institutions. This study presents a comprehensive comparative analysis of five prominent generative language models (GPT-4, GPT-3.5-Turbo, T5-Large, fine-tuned BERT, and BLOOM-7B), evaluated on their capacity to generate high-quality corporate letter templates for educational systems. Experiments were conducted on a corpus of 200 authentic institutional letters from Uzbek higher education institutions spanning five letter types. Model performance was assessed using BLEU, ROUGE-L, and F1 metrics alongside a structured human evaluation framework covering fluency, formality, and structural accuracy. Results demonstrate that instruction-tuned large language models significantly outperform encoder-based and smaller generative models, with GPT-4 achieving a BLEU score of 42.3 and a human approval rate of 87%. The study further investigates the impact of prompt engineering strategies, showing that structured few-shot prompts improve GPT-4's BLEU score to 44.8. The findings provide actionable guidelines for educational institutions considering the deployment of generative AI for administrative document automation.
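The automatic scores reported above follow the standard BLEU formulation of Papineni et al. [13]: a geometric mean of modified n-gram precisions multiplied by a brevity penalty. As a minimal sketch (a simplified single-reference, add-one-smoothed variant, not the paper's exact evaluation pipeline), the metric can be computed as follows:

```python
import math
from collections import Counter

def ngrams(tokens, n):
    """Count the n-grams of a token list."""
    return Counter(tuple(tokens[i:i + n]) for i in range(len(tokens) - n + 1))

def bleu(candidate, reference, max_n=4):
    """Sentence-level BLEU against a single reference.

    Simplified variant of Papineni et al. (2002): uniform weights over
    1..max_n gram precisions, add-one smoothing, and a brevity penalty.
    """
    cand = candidate.split()
    ref = reference.split()
    precisions = []
    for n in range(1, max_n + 1):
        cand_ngrams = ngrams(cand, n)
        ref_ngrams = ngrams(ref, n)
        # clipped (modified) n-gram precision: each candidate n-gram
        # counts at most as often as it appears in the reference
        overlap = sum(min(c, ref_ngrams[g]) for g, c in cand_ngrams.items())
        total = max(sum(cand_ngrams.values()), 1)
        precisions.append((overlap + 1) / (total + 1))  # add-one smoothing
    geo_mean = math.exp(sum(math.log(p) for p in precisions) / max_n)
    # brevity penalty discourages overly short candidates
    bp = 1.0 if len(cand) > len(ref) else math.exp(1 - len(ref) / max(len(cand), 1))
    return bp * geo_mean
```

A candidate letter identical to the reference scores 1.0, while a shorter, partially matching candidate is penalized by both clipped precision and the brevity penalty. Corpus-level BLEU figures such as the 42.3 reported for GPT-4 aggregate these statistics over all test letters rather than averaging sentence scores.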

References

  [1] Ministry of Higher Education, Science and Innovation of the Republic of Uzbekistan. (2022). Regulations on official correspondence in higher education institutions. Tashkent: MHESI Press.
  [2] UNESCO Institute for Statistics. (2022). Higher education enrollment trends in Central Asia 2010–2022. UNESCO.
  [3] Bommasani, R., Hudson, D. A., Adeli, E., et al. (2021). On the opportunities and risks of foundation models. arXiv preprint arXiv:2108.07258.
  [4] Vaswani, A., Shazeer, N., Parmar, N., Uszkoreit, J., Jones, L., Gomez, A. N., Kaiser, L., & Polosukhin, I. (2017). Attention is all you need. Advances in Neural Information Processing Systems, 30, 5998–6008.
  [5] Devlin, J., Chang, M. W., Lee, K., & Toutanova, K. (2019). BERT: Pre-training of deep bidirectional transformers for language understanding. Proceedings of NAACL-HLT 2019, 4171–4186.
  [6] Radford, A., Wu, J., Child, R., Luan, D., Amodei, D., & Sutskever, I. (2019). Language models are unsupervised multitask learners. OpenAI Blog, 1(8).
  [7] Brown, T. B., Mann, B., Ryder, N., Subbiah, M., Kaplan, J., Dhariwal, P., et al. (2020). Language models are few-shot learners. Advances in Neural Information Processing Systems, 33, 1877–1901.
  [8] Raffel, C., Shazeer, N., Roberts, A., Lee, K., Narang, S., Matena, M., Zhou, Y., Li, W., & Liu, P. J. (2020). Exploring the limits of transfer learning with a unified text-to-text transformer. Journal of Machine Learning Research, 21(140), 1–67.
  [9] OpenAI. (2023). GPT-4 technical report. arXiv preprint arXiv:2303.08774.
  [10] Le Scao, T., Fan, A., Akiki, C., Pavlick, E., Ilic, S., Hesslow, D., et al. (2022). BLOOM: A 176B-parameter open-access multilingual language model. arXiv preprint arXiv:2211.05100.
  [11] Maynez, J., Narayan, S., Bohnet, B., & McDonald, R. (2020). On faithfulness and factuality in abstractive summarization. Proceedings of ACL 2020, 1906–1919.
  [12] Zhang, T., Kishore, V., Wu, F., Weinberger, K. Q., & Artzi, Y. (2020). BERTScore: Evaluating text generation with BERT. Proceedings of ICLR 2020.
  [13] Papineni, K., Roukos, S., Ward, T., & Zhu, W. J. (2002). BLEU: A method for automatic evaluation of machine translation. Proceedings of ACL 2002, 311–318.
  [14] Ouyang, L., Wu, J., Jiang, X., Almeida, D., Wainwright, C. L., Mishkin, P., et al. (2022). Training language models to follow instructions with human feedback. Advances in Neural Information Processing Systems, 35, 27730–27744.