Comparison of the AI models used by nele.ai
Which AI models does nele.ai offer?
nele.ai provides various AI models for generating text and images as well as for image recognition (vision).
Text-generating AI models at nele.ai (as of August 2024):
Server location Europe
Azure GPT-4o (128k ➜ approx. 96,000 words* per chat) - Trained with data up to October 2023
Azure GPT-4o mini (128k ➜ approx. 96,000 words* per chat) - Trained with data up to October 2023
Azure GPT-4 Turbo (128k ➜ approx. 96,000 words* per chat) - Trained with data up to December 2023
Azure GPT-3.5 Turbo (16k ➜ approx. 12,000 words* per chat) - Trained with data up to September 2021
Claude 3.5 Sonnet (200k ➜ approx. 150,000 words per chat) - Trained with data up to August 2023
Claude 3 Haiku (200k ➜ approx. 150,000 words per chat) - Trained with data up to August 2023
Mistral Small (32k ➜ approx. 24,000 words per chat) - Training data cutoff not disclosed
Mistral Large (32k ➜ approx. 24,000 words per chat) - Training data cutoff not disclosed
Server location USA
GPT-4o (128k ➜ approx. 96,000 words* per chat) - Trained with data up to October 2023
GPT-4o mini (128k ➜ approx. 96,000 words* per chat) - Trained with data up to October 2023
GPT-4 Turbo (128k ➜ approx. 96,000 words* per chat) - Trained with data up to December 2023
GPT-3.5 Turbo (16k ➜ approx. 12,000 words* per chat) - Trained with data up to September 2021
Claude 3 Opus (200k ➜ approx. 150,000 words per chat) - Trained with data up to August 2023
Claude 3.5 Sonnet (200k ➜ approx. 150,000 words per chat) - Trained with data up to August 2023
Claude 3 Haiku (200k ➜ approx. 150,000 words per chat) - Trained with data up to August 2023
Image-generating AI models at nele.ai (as of August 2024):
Server location Europe
Azure DALL·E 3
Server location USA
OpenAI DALL·E 3
Vision (image recognition) models at nele.ai (as of August 2024):
Server location Europe
Azure GPT-4o
Azure GPT-4o mini
Claude 3.5 Sonnet
Claude 3 Haiku
Server location USA
GPT-4o
GPT-4o mini
Claude 3 Opus
Claude 3.5 Sonnet
Claude 3 Haiku
IMPORTANT NOTICE
All AI-generated content must be checked for accuracy before use.
*Generative AI models are limited by a token limit; a token is usually a word, part of a word, or a punctuation mark. This limit defines how much text the AI can process or generate in one session. Once an interaction reaches the maximum number of tokens, the AI cannot take in further information without discarding or overwriting earlier content. The best-known example is GPT-3.5 Turbo from OpenAI, which has an upper limit of 16,000 tokens (approx. 12,000 words). Within this window, the system must select the most relevant information to continue the conversation effectively.
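As a rough illustration of how these limits behave in practice, here is a minimal Python sketch that counts the tokens in a chat and compares them against a model's context window. It assumes OpenAI's tiktoken tokenizer; the token limits mirror the list above, and the ~0.75 words-per-token ratio is only a rule of thumb, not part of nele.ai itself.

```python
# Rough check of whether accumulated chat text still fits a model's context window.
# Token limits mirror the list above; the ~0.75 words-per-token ratio is an
# approximation, not an exact conversion.
import tiktoken

CONTEXT_LIMITS = {
    "gpt-4o": 128_000,
    "gpt-4o-mini": 128_000,
    "gpt-4-turbo": 128_000,
    "gpt-3.5-turbo": 16_000,
}

def count_tokens(text: str, model: str = "gpt-4o") -> int:
    """Count tokens the way OpenAI models do, falling back to a generic encoding."""
    try:
        enc = tiktoken.encoding_for_model(model)
    except KeyError:
        enc = tiktoken.get_encoding("cl100k_base")
    return len(enc.encode(text))

def fits_in_context(chat_history: str, model: str) -> bool:
    """True if the chat so far is still below the model's token limit."""
    return count_tokens(chat_history, model) < CONTEXT_LIMITS[model]

# Example: 128,000 tokens x ~0.75 words/token ≈ 96,000 words, as stated above.
print(round(CONTEXT_LIMITS["gpt-4o"] * 0.75))  # ~96000
```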