{"id":43700,"date":"2023-06-06T07:11:29","date_gmt":"2023-06-06T11:11:29","guid":{"rendered":"https:\/\/centricconsulting.com\/?p=43700"},"modified":"2023-07-07T15:25:17","modified_gmt":"2023-07-07T19:25:17","slug":"transform-your-software-testing-methods-with-the-help-of-ai-and-chatgpt","status":"publish","type":"post","link":"https:\/\/centricconsulting.com\/blog\/transform-your-software-testing-methods-with-the-help-of-ai-and-chatgpt\/","title":{"rendered":"Transform Your Software Testing Methods with the Help of AI and ChatGPT"},"content":{"rendered":"
When I talk to our clients\u2019 developers and testing specialists about how they develop ideas about what testing should cover, some themes readily emerge. One theme focuses on meeting business needs by emulating the actions and activities of a person using the software. The other bases testing on documented requirements or acceptance criteria, presented as \u201cthe software must do this<\/em> and not that.\u201d<\/em><\/p>\n These seem reasonable starting points. However, many things beyond simple acceptance criteria need testing. Depending on how an organization approaches testing, this might not be a problem if a team handles performance, accessibility, security and other non-functional requirements (NFRs). If there isn\u2019t a team to handle NFRs, and testers have limited experience with them, how would they even know to think about them?<\/p>\n Another problem I see is companies assigning testers to a project with little understanding of how the software is used.<\/strong> Then they brush off concerns, saying, \u201cThere are the requirements. Just test it.\u201d<\/p>\n Story or function-level requirements rarely give testers unfamiliar with the application or business environment the information they need to do meaningful integration or regression testing and avoid potential problems.<\/p>\n A fair number of people are looking to ChatGPT, AI, machine learning and other trendy tools and ideas to help solve these challenges<\/a>.<\/p>\n Can ChatGPT really help?<\/strong> Excellent question. I began experimenting with it to learn what it might \u2013 and might not \u2013 do for test ideas.<\/p>\n ChatGPT<\/a> is an artificial intelligence language model developed by OpenAI<\/a> that uses advanced natural language processing (NLP) techniques to generate human-like responses to text prompts. 
Data scientists trained it on a massive dataset of text, allowing ChatGPT to learn the nuances and patterns of human language and generate natural-sounding responses.<\/p>\n We\u2019ve seen how AI and ChatGPT can help with acceptance criteria<\/a> for stories. Looking for ideas around testing is similar but slightly different. When test teams are unsure about how to test something, ChatGPT can help fill the gap. To make that work, the prompt needs three parts: the context of the problem, the specifics of the situation, and the specific ask.<\/strong><\/p>\n Let\u2019s look at an example. A tester needs to test a signon screen. They give the context of their situation: \u201cI am testing a user signon screen.\u201d Then they add specifics, like \u201crequires a user signon and password.\u201d Then comes the specific ask: \u201cWhat things would be good to test for?\u201d Here\u2019s what my request to ChatGPT looked like: If I am testing a user signon screen that requires a user signon and password, what things would be good to test for? <\/span>\n The response I received from ChatGPT (version 3) looked like this:<\/strong><\/p>\nUsing ChatGPT to Develop Test Ideas<\/h2>\n
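The context-specifics-ask recipe described above can be sketched as a small helper. This is an illustrative sketch, not code from the article: the function name and the idea of concatenating the three parts into one prompt string are my own assumptions about how a team might make the recipe repeatable.

```python
def build_test_idea_prompt(context: str, specifics: str, ask: str) -> str:
    """Assemble a ChatGPT prompt from the three parts the article recommends:
    the context of the problem, the specifics of the situation, and the ask."""
    # Joining with single spaces keeps the request reading as one sentence.
    return " ".join([context, specifics, ask])

# The signon-screen example from the article, rebuilt with the helper:
prompt = build_test_idea_prompt(
    "If I am testing a user signon screen",
    "that requires a user signon and password,",
    "what things would be good to test for?",
)
print(prompt)
```

Printed, this reproduces the request shown above; the same helper could feed the prompt to any chat-completion API or be reused with new context and specifics for the next screen under test.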