Our vision

Generative AI is one of the most powerful, and potentially dangerous, technologies of our time. We aim to harness its power on the internet and become a companion for users like you. Our mission is to enhance critical thinking through the use of AI.

Generative AI should be

  • Safe
  • Fair
  • Ethical
  • Sustainable
  • Reliable

No one should fear the use of Generative AI.

Using Generative AI fairly ensures objective outcomes for all.

Ethical AI is a top priority when using Generative AI.

Increasing usage of Generative AI means finding new ways to ensure sustainability.

Maintaining accuracy enables users to confidently rely on Generative AI.

Tackling the unknown issues of generative AI

Real-world incidents demonstrate common pitfalls of Generative AI and the gap between public perception and reality.

Misinformation generation

AI can generate convincing yet false information, which may lead to legal or professional repercussions.

Data privacy issues

Information shared with Generative AI services is stored on the providers' servers and can be used to improve their models unless users opt out. This poses a serious data-privacy risk and has led many companies and countries to restrict their use.

Bias and ethical concerns

Both underlying training data and human evaluators of language models can become sources of biases.

Our solution

We use AI for good: our multi-tool detects AI-generated text and automatically checks facts. It is an easy-to-use browser plugin, available on the Chrome Web Store.

Fact-checking across multiple sources

Looking up relevant information from multiple sources to validate the response, enhancing the credibility and reliability of generative AI outputs.
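The aggregation step behind this idea can be sketched in a few lines. The function name and verdict labels below are hypothetical and purely illustrative; a production fact-checker would also weigh source reliability and evidence quality:

```python
from collections import Counter

def aggregate_fact_check(verdicts):
    """Combine per-source verdicts for one claim into a single label
    by majority vote. Verdict labels ('supported', 'refuted',
    'unverifiable') are illustrative, not a real API."""
    if not verdicts:
        # No source had anything to say about the claim.
        return "unverifiable"
    counts = Counter(verdicts)
    label, _ = counts.most_common(1)[0]
    return label

# Three independent sources support the claim, one refutes it,
# so the majority verdict is 'supported'.
print(aggregate_fact_check(["supported", "supported", "refuted", "supported"]))
```

Majority voting is the simplest possible aggregation; the point is that cross-referencing several sources makes a single unreliable source much less likely to decide the outcome.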

Fake news detection

Detecting fake news even without corroborating evidence from other reliable information sources.

AI-generated text detection

Determining whether a text was generated by an AI, even when it is written in a human-like, indistinguishable style, and whether it can be attributed to a specific generative AI model, e.g., “this text has been generated by GPT”.
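Detection systems typically combine many stylometric and model-based signals. As a purely illustrative example (not our actual detector), one classic signal is “burstiness”, the variation in sentence length, which tends to be higher in human writing than in some AI-generated text:

```python
import re
from statistics import mean, pstdev

def sentence_length_burstiness(text):
    """Toy stylometric feature: standard deviation of sentence lengths
    (in words) divided by the mean length. A single feature like this
    is NOT a reliable detector on its own; real systems combine many
    such signals with learned models."""
    # Naive sentence split on terminal punctuation.
    sentences = [s for s in re.split(r"[.!?]+", text) if s.strip()]
    lengths = [len(s.split()) for s in sentences]
    if not lengths:
        return 0.0
    return pstdev(lengths) / mean(lengths)
```

A text with uniform sentence lengths scores 0.0, while varied human-style prose scores higher; a detector would feed features like this into a classifier rather than thresholding any one of them.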

New features coming soon!


Khaleeq Aziz CEO

– Co-founder and CEO of Symanto and Dailight
– Board member and founder of over 10 AI start-ups and scale-ups
– YPO member
– MSc. Forensic Psychology, Aston University

Pablo Serna Lead Research Scientist

– PhD in computational physics
– Former CTO of Chat Ergo Bot
– Developed its backbone using LLM technology
– Postdoctoral Researcher in Physics at the University of Oxford and ENS Paris

Aurora Cobo Aguilera Research Scientist

– PhD in Machine Learning with a focus on NLP
– Former Data scientist at Huawei
– NLP instructor at BBVA (bank)
– Professor of Big Data at Valley Digital Business School

Stuart Winter-Tear Head of Product

– Over 20 years' experience in Product Management
– Manager of cross-functional teams in a variety of industries, including cyber security, online gaming, and eCommerce
– BSc in Psychology and several certifications in Data Science and Machine Learning

Natalia Corazza UX/UI Designer

– 8 years' experience in Graphic and UX/UI Design
– Professor of Graphic Design for Videogames
– BA in Graphic Design from Buenos Aires University, Argentina
– Cross-skilled in multiple design technologies

Alfonso Rodriguez Senior Software Engineer

– Over 6 years' experience in Software Engineering
– Specialised as a full-stack developer
– Extensive knowledge of infrastructure maintenance
– BSc in Computer Science from UPV


Vanesa Martinez Community Manager

– Extensive experience in building marketing teams in the Startup world
– Led an award-winning company marketing strategy in Spain and the UK
– Well versed in social media management and community engagement
– Post Graduate degree in Digital Marketing

Veton Kaso Frontend Developer

– Over 5 years' frontend and backend experience
– Led multiple mobile and software application projects
– Former teacher of programming at high school level
– Degree in Computer Science

Francisco Rangel Product Advisor

– Chief product Officer at Symanto
– Former CTO of Autoritas (an AI-based digital transformation company)
– Ph.D. Computer Science, Technical University of Valencia

James Dooms Programme Advisor

– Over 10 years’ experience in Digital Project and Product management
– Led the implementation of several web and AI based solutions
– Extensive experience in leading projects in the Digital transformation industry
– M.A. Media and Business

Marc Franco Scientific Advisor

– Chief Scientific Officer at Symanto
– Adjunct Professor, Universidad Europea
– Ph.D. Computer Science (Artificial Intelligence), Technical University of Valencia


  • What is an LLM?

    A large language model (LLM) is a powerful tool that uses a neural network with billions of parameters to understand and generate human language. It is trained on vast amounts of text data using self-supervised or semi-supervised learning. Recently, the launch of ChatGPT has made LLMs more accessible to the public. Other examples of LLMs include BERT, GPT-3, GPT-4, and Chinchilla. Widely known tools such as ChatGPT and Google Bard rely on this core technology to provide their functionality.

  • Are all Generative AI tools based on the same code/AI?

    No, not all Generative AI tools are based on the same code or AI. Generative AI uses a variety of models and architectures, each with its own approach and underlying code. For example, language models such as GPT-3, GPT-4, and BERT each have their own distinct architectures and codebases.

  • What is a Safe AI Seal?

    Like ISO certifications or the TÜV seal in Germany, we want to provide AI users with a mechanism that helps them ensure the safe usage of AI models. The “Safe AI Seal” will become the standard certification for AI models to prove their safety.

    Decisions about the “Safe AI Seal” will be made collectively: whether a model is safe or not will be voted on by the Genaios AI community (a democratised solution).

Activate Genaios to harness
the power of Generative AI for all.

To stay up to date on our product releases, join the waitlist now.