AI as a Living Ecosystem: In Conversation with Alexandru Dan, AI Advisor for Brainactive

July 7, 2025

10 minute read

Written by George Ganea


Tags: AI in market research, Conversational AI applications, AI and decision making, AI ethics and regulation, Future of work and AI

Alexandru Dan is a technology evangelist with a rare ability to translate the complexities of artificial intelligence into practical, business-driven innovation. As AI Advisor for Brainactive and founder of multiple startups, including Arcanna AI and Remote Labs, Alexandru brings decades of hands-on experience in systems architecture, product development, and applied AI. He has led over 40 development teams across organizations like eMag, the European Commission, and Infor, and now serves as both CEO of TVL Tech and lecturer in AI at ASE. In this conversation, we explore his vision of AI not just as a tool, but as a living ecosystem—one that is already reshaping the way we research, decide, and work.

🧠 I. Personal Vision and Professional Journey

George Ganea: You’ve had an unusual career path – math olympiad medalist, teacher, entrepreneur, researcher. How has that shaped your relationship with AI?

Alexandru Dan: I think this journey gave me a deep trust in continuous learning. I’ve gone through many roles – teaching, research, launching projects in education and AI – and all of that helped me see technology not just as a tool, but as a dialogue partner, a creative companion. For me, AI is an extension of the mind: I use it to test hypotheses, build things, fail fast, iterate faster. My teaching background helps me appreciate lifelong learning, while research taught me not to get excited too quickly, but to test and validate carefully.

GG: What motivates you most in working with AI – curiosity, innovation, or social impact?

AD: It’s the mix. Curiosity is the engine, but social impact gives direction. I’m fascinated by the idea that you can build a model that learns on its own. But I’m also concerned about what that model does with that knowledge. AI isn’t just about code anymore – it’s about decisions, trust, responsibility. That’s where the “wow” moments come from, but also a lot of ethical dilemmas.

GG: You founded the tech academy at eMag and helped train generations of young talent. How do you see the connection between education, research, and industry in AI?

AD: Education is the invisible infrastructure of any technological leap. Without people who understand how models work, what bias is, or how to build clean workflows, you’re in trouble. Research brings depth, industry brings speed, and education builds the bridge. Sadly, we still see them as silos. I advocate for hybrid, collaborative teams that learn in the process.

GG: What does a typical day look like for you now, juggling your roles as CEO, teacher, consultant, and advisor?

AD: I split my time across projects. Morning might be a consultancy call, lunchtime I’m teaching at a university, evening I’m testing a new tool or writing a custom prompt. I try to carve out at least one hour a day without calls – a space to “listen” to technology and ask it questions. When you have kids, teaching, learning, and business all in one calendar, you learn to prioritize quickly.

GG: How has your view of AI changed in the last five years?

AD: Radically. Five years ago, I saw AI as a set of algorithms. Now I see it as a living ecosystem. We’ve moved from “tool” to “co-author.” GPT was a turning point – it showed that AI can have intuition, write coherently, and even become dangerous if not carefully calibrated. Today, we’re seriously discussing superintelligence – a few years ago it sounded like science fiction. Now it’s on the roadmap at OpenAI.

🧩 II. AI as Conversational Agent and Solution Architect

GG: In a previous talk, you asked Daniel: “Why do we still need written questionnaires? Why not just talk to it?” Can you elaborate?

AD: Sure. It’s a simple idea: why make someone fill out a form when you can have a natural conversation? Imagine answering research questions while driving to work. Instead of “forced responses,” you get intentions, nuance, context. Technologically, we’re almost there. GPT already has a voice. The next step is having it send emails, trigger workflows, act like a real assistant – a Jarvis.

GG: How close are we to having an AI that “knows” market research like a human expert?

AD: Depends on what “knows” means. Structurally – tone, patterns, logic – we’re close. But intuition, contextual judgment? That requires deliberate fine-tuning. Still, it’s clear that AI is learning fast. In a few years, it’ll be a solid companion for any researcher.

GG: What would an AI that thinks like a researcher look like? How do we build it?

AD: First, it needs a mental model of objectives: what it’s looking for, why it matters, how it defines “insight.” Then, it has to learn to ask good questions – not just give answers. We build that through constant exposure to real cases, critical evaluation layers, and a memory that tracks what works. Not easy, but doable.

GG: Why is it important to see AI as a collaborator, not just a tool?

AD: Because you use a tool and then turn it off. A collaborator challenges you, complements you, makes you better. If we limit AI to “just a tool,” we miss the potential for mutual learning. I use AI as a sounding board – to test hypotheses, to explore unexpected angles. And sometimes, it gives me better ideas than I had.

GG: What are the current limits of conversational AI in professional contexts?

AD: Trust. Models hallucinate. They can confidently claim 2+2=5 if prompted badly. So you need brakes – rules, validations, sometimes even a second AI to verify the first. And always: human in the loop. In medicine, banking, hiring – you can’t leave final decisions to a statistical model.
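To make those “brakes” concrete, here is a minimal Python sketch of the pattern Alexandru describes: a second model scores the first model’s answer, and anything below a confidence threshold is escalated to a human instead of being returned as final. Both model calls are placeholders standing in for whatever LLM client you use (an assumption, not a real API); the escalation logic is the point.

```python
# Minimal sketch of "a second AI to verify the first" plus a
# human-in-the-loop brake. Both model calls are placeholders.

def ask_model(question: str) -> str:
    # Placeholder: swap in a real call to your LLM client here.
    return "Draft answer from the primary model."

def ask_verifier(question: str, answer: str) -> float:
    # Placeholder: a second model (or a rule set) scores the answer's
    # factual consistency on a 0.0-1.0 scale.
    return 0.65

def answer_with_guardrails(question: str, threshold: float = 0.8) -> str:
    """Return the model's answer, or flag it for human review when the
    verifier's score falls below the threshold."""
    answer = ask_model(question)
    score = ask_verifier(question, answer)
    if score < threshold:
        # Low-confidence answers are never returned as final decisions.
        return f"[escalated to human review] draft: {answer}"
    return answer

if __name__ == "__main__":
    print(answer_with_guardrails("What drives churn in segment A?"))
```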

🏢 III. AI in Business, Research, and Decision-Making

GG: You come from outside our industry. What do you see that’s often invisible from within, when it comes to AI in market research?

AD: I often notice a kind of rigid thinking, rooted in traditional methods. Many researchers still see AI as a risk rather than a resource. For instance, insisting that AI must follow every traditional step – pre-test, questionnaire, sampling, dashboard – when AI could reshape the entire process. Maybe you no longer need open-ended questions if AI can infer opinions from free language. Maybe you don’t need tables if AI can deliver insights tailored to each stakeholder’s role. From inside the industry, disruption is hard to spot because the methods still seem to work.

GG: How can a well-calibrated AI go beyond analyzing data to actually transforming business decisions?

AD: Through context. AI is valuable not just when it tells you “what happened,” but “what could happen if...” A well-calibrated model can analyze patterns, anticipate outcomes, and even suggest actions. It can say: “I recommend this strategy because in 84% of similar cases, it led to a 12% growth.” That’s not just decision support – it’s augmented critical thinking.

GG: What would an “insight-as-a-service” built on AI look like?

AD: First, it should be conversational – you talk to it. Then, it needs memory and context – to remember who you are, your past projects, your interests. Then, explainability – it should show how it reached its conclusions. And finally, adaptability – able to switch between industries, from plain to technical language. A digital researcher who doesn’t sleep, doesn’t get tired, and learns from every interaction.

GG: What elements do we need to combine to build a sustainable research “stack”?

AD: Data quality is the core. If the input is flawed, any AI layer becomes toxic. Then, well-trained models, yes – but also professional prompt engineering (which is a craft on its own). Add ergonomic, transparent interfaces, human oversight, and auditability. Most importantly, include people who know how to ask the right questions – not just read the answers. It’s an interdisciplinary effort.

GG: Can AI support “sensemaking” – not just raw data processing?

AD: Yes – but only if calibrated with purpose. Current models can synthesize dozens of pages, build coherent narratives, find subtle patterns. But they can’t decide what’s meaningful for a business. That’s where the hybrid comes in: AI shows what’s visible, humans decide what matters. Sometimes, AI can even help us see what we missed – and that might be its greatest value.

📉 IV. Data Quality, Synthetic Respondents, and Emerging Risks

GG: How do you see the problem of data quality in the context of automated generation or AI-based interpretation?

AD: It’s a huge problem, because AI magnifies errors. If you feed in bad data – from poor sampling, false responses, or weak tools – AI will produce confident conclusions on fragile foundations. That creates the illusion of rigor when there’s only smoke. So inputs need to be cleaned, verified, validated. No shortcuts.
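As an illustration of what “cleaned, verified, validated” can mean before any model sees the data, here is a toy Python sketch that filters survey responses. The specific checks shown (duplicate submissions, incomplete answers, straight-lining) are assumed examples, not an exhaustive or standard pipeline.

```python
# Toy input-quality gate for survey data: drop duplicates, incomplete
# responses, and "straight-liners" before an AI layer touches anything.
# The checks and thresholds are illustrative assumptions.

def validate_responses(responses: list[dict]) -> list[dict]:
    """Keep only responses that pass basic quality checks."""
    seen_ids = set()
    clean = []
    for r in responses:
        if r["respondent_id"] in seen_ids:
            continue  # duplicate submission
        seen_ids.add(r["respondent_id"])
        answers = r["answers"]
        if not answers or any(a in (None, "") for a in answers):
            continue  # incomplete response
        if len(answers) > 3 and len(set(answers)) == 1:
            continue  # straight-liner: one identical answer to everything
        clean.append(r)
    return clean

sample = [
    {"respondent_id": 1, "answers": [4, 5, 3, 4]},
    {"respondent_id": 1, "answers": [4, 5, 3, 4]},   # duplicate
    {"respondent_id": 2, "answers": [3, 3, 3, 3]},   # straight-liner
    {"respondent_id": 3, "answers": [5, 2, "", 4]},  # incomplete
]
print(validate_responses(sample))  # only respondent 1's first response survives
```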

GG: Is there a risk we’re building false conclusions on seemingly clean but deeply flawed data?

AD: Absolutely. In fact, it’s already happening. Models provide “good” results, but if you look at the foundation, it’s based on false assumptions. Bias is sneaky. Sometimes it’s in the question phrasing, sometimes in data sources, sometimes in what isn’t asked. AI can mask this bias with syntactic coherence. But coherence isn’t truth.

GG: What’s your take on “synthetic respondents”? Where’s the line between useful and dangerous?

AD: I find synthetic respondents fascinating – especially for testing, exploration, quick validation. But the line is clear: we mustn’t confuse them with real humans. They’re statistical models, not social beings. They can simulate opinions but don’t live the consequences of decisions. So yes, simulation can go far – but we mustn’t mistake it for reality.

GG: How far can we go with automation without losing trust in the results?

AD: It depends on how you communicate what AI is doing. If the user understands the model’s capabilities and limitations, trust holds. But overpromise and deliver hallucinations, and trust evaporates. Automation is good when paired with transparency, control, and clear feedback loops.

GG: How do we guard against bias, hallucination, privacy issues, and confusion in an AI-driven ecosystem?

AD: With both automated and human checks, trust scores, decision logs, and the right to “see” how AI reasoned. Plus, education – users need to understand what they’re asking and how to interpret responses. Without a culture of critical thinking, AI becomes a “false oracle.”
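A decision log of the kind Alexandru mentions can start very simply. The sketch below records each AI-assisted answer together with a trust score and the reasoning shown to the user, appended to an audit file; the field names and file format are illustrative assumptions, not a standard schema.

```python
# Sketch of an auditable decision log: every AI-assisted answer is stored
# with a trust score and the reasoning shown to the user. Field names are
# illustrative assumptions.

from dataclasses import dataclass, field, asdict
from datetime import datetime, timezone
import json

@dataclass
class DecisionRecord:
    question: str        # what the user asked
    answer: str          # what the model returned
    trust_score: float   # verifier or self-reported confidence, 0.0-1.0
    reasoning: str       # the explanation surfaced to the user
    model_id: str        # which model produced the answer
    timestamp: str = field(
        default_factory=lambda: datetime.now(timezone.utc).isoformat()
    )

def log_decision(record: DecisionRecord, path: str = "decision_log.jsonl") -> None:
    """Append one record per line to a JSON Lines audit file."""
    with open(path, "a", encoding="utf-8") as f:
        f.write(json.dumps(asdict(record)) + "\n")

log_decision(DecisionRecord(
    question="Which segment should we target next quarter?",
    answer="Segment B",
    trust_score=0.82,
    reasoning="Highest purchase intent across the last three survey waves.",
    model_id="example-model-v1",
))
```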

🔍 V. The Future of Professions: Researcher, Analyst, Consultant, Developer

GG: What happens to market researchers in a world where AI can generate data, conclusions, and even storytelling?

AD: They don’t disappear – their role transforms. They become research architects, curators of meaning, masters of asking good questions. They won’t spend time compiling spreadsheets but will focus on identifying what’s truly insightful, what deserves to be shown, and what needs deeper investigation. Researchers become strategists.

GG: Is there still room for the “classic developer” in the era of generative code?

AD: Yes – but the role shifts. They become system designers, coherence referees, AI supervisors. Generative code handles the grunt work, but architecture, scalability, security – those remain deeply human. The “classic” coder who ignores AI will struggle, but the one who adapts will have more power than ever.

GG: What does the next generation of professionals need to know: AI, psychology, logic, data? What can’t they skip?

AD: Logic and ethics. Without them, you’re just a button pusher for models. They need to understand how AI works, yes – but also how the human mind works, how trust is built, how a meaningful question is crafted. Multidisciplinarity isn’t a bonus anymore – it’s a survival skill.

GG: What new roles will emerge in the next 3 years in companies that take AI seriously?

AD: Prompt engineers, AI product owners, knowledge curators, ethics advisors, feedback-loop designers. Roles that don’t even exist in current job codes. They’ll sit at the crossroads between technical, psychological, and strategic domains.

GG: How do you see the hybrid evolution of teams: human + AI + platform?

AD: Like an orchestra. The human is the conductor, AI is a powerful section of instruments, and the platform is the score and the concert hall. If you bring them together wisely, you get a symphony. If not, you get noise.

🧭 VI. Ethics, Regulation, and Responsibility

GG: What does ethical AI look like, from your perspective?

AD: An AI that can say: “I don’t know” or “I can’t answer that.” One that’s transparent, auditable, and predictable. An AI that knows its own limits – and yours. Ethics isn’t a fixed rulebook – it’s a dynamic framework built together with users.

GG: What regulations do you consider essential to protect users without stifling innovation?

AD: Mandatory transparency, audit rights, explainability in automated decisions, personal data protection, and clear boundaries around deepfakes or identity simulation. At the same time, these regulations must be written by people who understand the tech – otherwise, they’re either ineffective or counterproductive.

GG: What does real transparency mean in an AI product – and how can we communicate it without technical jargon?

AD: It means you can ask: “How did you get to this answer?” – and receive an explanation you can understand. It means knowing what data the model was trained on, who built it, what biases it might carry. It’s about clarity, not jargon.

GG: Can AI help repair polarization and social fragmentation – or will it make things worse?

AD: It can do both. AI can translate, unify discourse, filter hate speech. But it can also amplify societal ills if misused. It’s an amplifier: if the input is polarized, the output can be explosive. It all depends on what we feed into it.

GG: If you could leave just one message about AI responsibility – what would it be?

AD: AI is not a force of nature. It’s created by humans, trained by humans, used by humans. If we want AI to be our ally, we have to treat it with both respect and critical thinking. We can’t offload responsibility just because the machine seems smarter. It’s still on us.
