Our vision

We aim to provide everyone with a safeguard for the use of Generative AI, the most powerful yet potentially dangerous technology of our time. Our mission is to enable companies and individuals to harness the power of AI.

Generative AI should be

  • Safe
  • Fair
  • Ethical
  • Sustainable
  • Reliable

No one should fear the use of Generative AI.

Using Generative AI fairly ensures objective outcomes for all.

Ethical AI is a top priority when using Generative AI.

Increasing usage of Generative AI means finding new ways to ensure sustainability.

Maintaining accuracy enables users to confidently rely on Generative AI.

Tackling unknown issues of Generative AI

Real-world incidents demonstrate the common pitfalls of Generative AI and the gap between public perception and reality.

Misinformation generation

AI can generate convincing yet false information, which may lead to legal or professional repercussions.

Data privacy issues

Information shared with Generative AI services is stored on the providers' servers and can be used to improve their models unless users opt out. This poses a great risk to data privacy and has prompted many companies and countries to restrict their use.

Bias and ethical concerns

Both the underlying training data and the human evaluators of language models can become sources of bias.

Our solution

Genaios is an anti-AI-risk suite for individuals and businesses, built to detect common risks of Generative AI and to protect users and society worldwide from them.

Multi-source fact checking

Looking up relevant information from multiple sources to validate the response, enhancing the credibility and reliability of generative AI outputs.
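As an illustration only, multi-source validation can be sketched as confidence-weighted voting: each source returns a verdict on a claim together with a confidence, and the weighted evidence decides whether the claim is supported. This is a hypothetical Python sketch, not the Genaios implementation.

```python
# Hypothetical sketch of multi-source fact checking: each source yields
# (supports_claim, confidence), and the claim is validated by
# confidence-weighted voting across all sources.

def aggregate_verdicts(verdicts):
    """verdicts: list of (supports_claim: bool, confidence: float).

    Returns True if the confidence-weighted evidence supports the claim.
    """
    support = sum(conf for ok, conf in verdicts if ok)
    refute = sum(conf for ok, conf in verdicts if not ok)
    return support > refute

# Example: two sources support the claim, one weakly refutes it.
verdicts = [(True, 0.9), (True, 0.7), (False, 0.4)]
print(aggregate_verdicts(verdicts))  # True: support 1.6 > refute 0.4
```

A production system would additionally retrieve and rank the sources themselves; the aggregation step shown here is only the final decision.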

Fake news detection

Detecting fake news by checking for a lack of corroborating evidence from other reliable information sources and for a characteristic language style (e.g., emotionally manipulative wording).

IP violation detection

Identifying when a response of the generative AI is similar or identical to existing text on the internet that may be protected by license or copyright, in order to avoid an IP dispute.
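One simple way to illustrate "similar or identical to existing text" is word n-gram overlap: a generated text is compared against a known text and flagged when their Jaccard similarity exceeds a threshold. This is a hypothetical sketch; a real system would search the web at scale rather than compare against a single reference.

```python
# Hypothetical sketch of IP-risk flagging via word n-gram Jaccard similarity.

def ngrams(text, n=3):
    """Return the set of word n-grams in a text (case-insensitive)."""
    words = text.lower().split()
    return {tuple(words[i:i + n]) for i in range(len(words) - n + 1)}

def similarity(a, b, n=3):
    """Jaccard similarity between the n-gram sets of two texts."""
    ga, gb = ngrams(a, n), ngrams(b, n)
    if not ga or not gb:
        return 0.0
    return len(ga & gb) / len(ga | gb)

def flag_ip_risk(generated, reference, threshold=0.5):
    """Flag the generated text if it is too close to the reference."""
    return similarity(generated, reference) >= threshold
```

The threshold and n-gram length are illustrative parameters; tuning them trades off missed matches against false alarms.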

AI generated text detection

Determining whether an AI-generated text is written in an indistinguishable, human-like style, or whether it can be attributed to a specific generative AI model, e.g., “this text has been generated by GPT”.

Hate speech detection

A collection of AI models to identify harmful content and hateful tonality in generated texts.

Stereotype and bias detection

A collection of AI models to identify potential biases and stereotypes in generated texts.
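The "collection of AI models" idea behind both detectors can be sketched as an ensemble: several independent scorers each rate a text between 0 and 1, and the average score decides whether the text is flagged. The keyword-based scorer below is a hypothetical stand-in for real trained models.

```python
# Hypothetical ensemble sketch: several scorers rate a text in [0, 1]
# and the average decides whether it is flagged. Real detectors would
# be trained models, not keyword lists.

def keyword_scorer(flagged_words):
    """Build a toy scorer that rates a text by its share of flagged words."""
    def score(text):
        tokens = text.lower().split()
        hits = sum(1 for t in tokens if t in flagged_words)
        return min(1.0, 5 * hits / max(1, len(tokens)))
    return score

def ensemble_flag(text, scorers, threshold=0.5):
    """Flag the text if the average score across all models is high enough."""
    avg = sum(score(text) for score in scorers) / len(scorers)
    return avg >= threshold
```

Averaging over independent models is one common way to reduce the blind spots of any single detector; weighted or learned combinations are also possible.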


Khaleeq Aziz CEO

– Co-founder and CEO of Symanto and Dailight
– Board member and founder of over 10 AI start-ups and scale-ups
– YPO member
– MSc. Forensic Psychology, Aston University

Pablo Serna Research Engineer

– PhD in computational physics
– Former CTO of Chat Ergo Bot
– Developed the backbone using LLM technology.
– Postdoctoral Researcher in Physics at the University of Oxford and ENS Paris

Aurora Cobo Aguilera Research Scientist

– PhD in Machine learning with a focus on NLP
– Former Data scientist at Huawei
– Teacher of NLP at BBVA (Bank)
– Professor of Big Data at Valley Digital Business School

Alfonso Rodriguez Senior Software Engineer

– Over 6 years’ experience in Software Engineering
– Specialised as a full-stack developer
– Extensive knowledge of infrastructure maintenance
– BSc in Computer Science from UPV


Stuart Winter-Tear Head of Product

– Over 20 years’ experience in Product Management
– Manager of cross-functional teams across a variety of industries, including cyber security, online gaming, and eCommerce
– BSc in Psychology, with several certifications in Data Science and Machine Learning

Francisco Rangel Product Advisor

– Chief Product Officer at Symanto
– Former CTO of Autoritas (AI-based digital transformation company)
– Ph.D. Computer Science, Technical University of Valencia

James Dooms Programme Director

– Over 10 years’ experience in Digital Project and Product management
– Led the implementation of several web and AI based solutions
– Extensive experience in leading projects in the Digital transformation industry
– M.A. Media and Business

Marc Franco Scientific Advisor

– Chief Scientific Officer at Symanto
– Deep Learning Lecturer, University of Barcelona
– City Lead Valencia AI, Spain AI
– Ph.D. Computer Science (Artificial Intelligence), Technical University of Valencia


  • What is an LLM?

    A large language model (LLM) is a powerful tool that uses a neural network with billions of parameters to understand and generate human language. It is trained on vast amounts of text data using self-supervised or semi-supervised learning. Recently, the launch of ChatGPT has made LLMs more accessible to the public. Other examples of LLMs include BERT, GPT-3, GPT-4, and Chinchilla. Widely known tools such as ChatGPT and Google Bard rely on this core technology to provide their functionality.

  • Are all Generative AI tools based on the same code/AI?

    No, not all Generative AI tools are based on the same code or AI. Various models and architectures are used in Generative AI, each with its own unique approach and underlying code. For example, language models such as GPT-3, GPT-4, and BERT have their own distinct architectures and codebases.

  • What is a Safe AI Seal?

    Like ISO certifications or the TÜV seal in Germany, we want to give AI users a mechanism that helps them ensure the safe usage of AI models. The “Safe AI Seal” will become the standard certification for all AI models to prove their safety.

    Decisions about the “Safe AI Seal”, i.e. whether a model is safe or not, will be made collectively by a vote of the Genaios AI community (a democratised solution).

Activate Genaios to harness
the power of Generative AI for all.

To stay up to date on our product release, join the waitlist now.