Keynote: CodeCamp_Brasov conference.
Abstract: Testing AI: Five Obstacles and Seven Workarounds
There’s an incredible amount of noise around artificial intelligence these days, but very little reliable signal. AI will bring doom and destruction, or a world where cheerful robots feed us peeled grapes while we lie on the couch. Some say AI is already replacing the jobs of creative people; others say creative people will improve their jobs by using AI. Meanwhile, it’s not even clear what people mean by AI. How do we make sense of it all?
Serious testers know how to make sense of things: we test. That’s sometimes tricky, and AI in particular brings some special problems. AI is by nature obscure and fragile. It intrudes into human social life, it comes packaged with wishful claims, and the current flood of hype makes criticism socially challenging.
The good news is that real testing can step up and work around the obstacles. The skills of analysis, investigation, and critical thinking — and an understanding of testing’s essential missions — apply to any socio-technical product. Now testers and developers need to apply these skills more than ever. One thing is certain: traditional, formalized, procedurally structured test cases won’t do the job. And something else is certain, too: perspectives will change between the writing of this abstract and the delivery of this talk!
Michael Bolton will describe his experiences of analyzing AI- and LLM-based products, and will offer a set of heuristics to address the challenges that AI presents in today’s ever-changing technological landscape (ahem!).
Master classes at Codecamp:
Testers and Developers, Alone and Together
Rapid Software Testing Focused: Automation