NeuralSpace Release Notes: What’s new in Version 1.3.0?
We at NeuralSpace are constantly working to improve the NeuralSpace Platform and give our users more language AI capabilities. We have packaged the latest of these into a new release, which we have called Nami, and we are excited to share the updates with you (sneak peek: speech has finally arrived 😍).
All new users will directly get $200 worth of credits without entering any credit card details. This gives first-time users a good sense of how expensive (wait, it’s actually cheap 🤑) NeuralSpace is. It is easily enough to train around 30 models, deploy them, and test them thoroughly in production.
All existing free account users are now upgraded to an account with $200 worth of credits (if that’s you: congratulations 🥳).
Overall Platform changes
A few general changes have been implemented across the Platform. The Apps previously named NeuraLingo and NeuralAug are now called Language Understanding and Augmentation, respectively, so you can see directly what you can do with them.
😵 First big announcement: You can now train models using AutoNLP and deploy them with AutoMLOps for:
Transliteration (supporting 120 language pairs)
Entity Recognition (supporting 80+ languages)
Simply import a dataset, train a model on it, deploy and test it - all in a few clicks 😎
😵 Second big announcement: We are live with Speech to Text in the first 24 languages. Many more are coming over the next few months.
Our Language Understanding service has a new and improved AutoNLP pipeline that lets you train accurate models even on datasets with more than 200 intents (wow wow wow 😲). To ease the data creation process, we have added new features to our Data Studio, such as phonetic typing, which you can use to create datasets in languages that do not use the Latin alphabet, like Arabic, Hindi, Tamil, Telugu, and Chinese.
You can now create up to 1,000 projects from any account and add up to 1 million examples per project. Datasets can be uploaded directly to NeuralSpace in JSON, CSV, or Rasa YAML format. You can also choose from our 50+ existing datasets and import them directly into your project (many more to come soon).
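For reference, a minimal intent dataset in Rasa's YAML training-data format looks roughly like the sketch below (the intent and entity names are made up for illustration; check the Documentation for the exact schema the Platform expects):

```yaml
version: "3.1"
nlu:
  - intent: book_flight
    examples: |
      - I want to fly to [Dubai](city) tomorrow
      - Book me a ticket to [Mumbai](city)
  - intent: greet
    examples: |
      - hello
      - hi there
```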
🌍 We have also built a completely new multilingual parse feature that lets you train a Language Understanding model in any one language and use it in 100+ languages. It also supports automatic language detection, entity recognition, and intent detection in more than 100 languages. If you are building a multilingual chatbot but only have training data in one language, this is exactly what you need.
➕ Additional updates:
In the Data Studio, if you don’t have enough examples to train a model (our models require 10 or more examples per intent), you can now see which intents need more examples.
Several small bugs affecting right-to-left (RTL) languages like Arabic, Farsi, and Hebrew have been fixed.
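The 10-examples-per-intent check above can be sketched in a few lines of Python. The dataset shape here (a list of `{"text", "intent"}` records) is an assumption for illustration, not the Platform's actual export format:

```python
# Sketch: find intents that fall below the 10-examples-per-intent minimum.
# The record layout ({"text": ..., "intent": ...}) is an illustrative
# assumption, not the Platform's actual dataset schema.
from collections import Counter

MIN_EXAMPLES_PER_INTENT = 10

dataset = [
    {"text": "book a flight to Dubai", "intent": "book_flight"},
    {"text": "hi there", "intent": "greet"},
    # ... more examples ...
]

def intents_needing_examples(examples, minimum=MIN_EXAMPLES_PER_INTENT):
    """Return {intent: how many more examples it needs} for short intents."""
    counts = Counter(e["intent"] for e in examples)
    return {intent: minimum - n for intent, n in counts.items() if n < minimum}

shortfall = intents_needing_examples(dataset)
# Both intents above have 1 example each, so each needs 9 more.
```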
You can train Entity Recognition models in more than 80 languages using AutoNLP, then deploy and scale them with AutoMLOps via APIs or our no-code user interface. To add data to your project, either create your own datasets using the Data Studio or choose from our 15+ existing datasets that can be imported directly. Uploading your existing datasets is possible too, of course. Easy breezy! 💁‍♀️
🚀 Phonetic typing is also available in the Data Studio for Entity Recognition. Datasets can be uploaded in JSON, CSV, or Rasa YAML format directly on the Platform.
You can train Transliteration models in more than 80 languages using AutoNLP, then deploy and scale them with AutoMLOps. To add data to your project, either create your own datasets using the Data Studio or choose from our 15+ existing datasets that can be imported directly. Piece of 🍰?
🚀 To create datasets faster in languages like Arabic, Hindi, etc., use the new phonetic typing feature in the Data Studio. Datasets can be uploaded in JSON, CSV, or Rasa YAML format directly on the Platform.
Our Machine Translation service can now translate with annotations. 😉 Simply pass text and entities with start and end indices in a source language, and get back the translated text along with the entities’ start and end indices in the target language. Use it to bootstrap your Language Understanding or Entity Recognition datasets for chatbots 💬 in 100+ languages!
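The idea of index-annotated entities can be illustrated with a small sketch. Note that the field names (`text`, `entities`, `start`, `end`) and the hand-written German output are illustrative assumptions, not the Platform's actual request/response schema:

```python
# Sketch: entities annotated by character start/end index, as used in
# annotated translation. Field names and the example translation are
# illustrative assumptions, not the Platform's exact schema.

source = {
    "text": "Book a flight to Berlin on Monday",
    "entities": [
        {"type": "city", "start": 17, "end": 23},  # "Berlin"
        {"type": "date", "start": 27, "end": 33},  # "Monday"
    ],
}

def entity_texts(payload):
    """Slice each entity's surface form out of the text by its indices."""
    return [payload["text"][e["start"]:e["end"]] for e in payload["entities"]]

# The service returns the translated text with the entity indices
# re-computed for the target language, conceptually like this:
target = {
    "text": "Buche einen Flug nach Berlin am Montag",
    "entities": [
        {"type": "city", "start": 22, "end": 28},  # "Berlin"
        {"type": "date", "start": 32, "end": 38},  # "Montag"
    ],
}
```

The useful property is that the spans stay valid after translation: slicing the target text by the returned indices yields the translated entity values, so annotated training examples can be generated without re-labeling by hand.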
🌐 We have expanded our language support with 11 new languages: Twi, Tsonga, Tigrinya, Quechua, Oromo, Lingala, Guarani, Dhivehi, Bambara, Aymara and Assamese.
Check out the full language support.
Our Augmentation service now supports annotated augmentation. All you need to do is pass text and entities with start and end indices, and you get back up to 10 semantically similar sentences with their entities. 💥 Use it to 10x your dataset size and get improved results for your Language Understanding and Entity Recognition models.
Speech to Text (Preview)
💃🏽 Last but definitely not least, our pre-trained Speech-to-Text models are now live for 24 languages and various domains (medical, financial, etc.). You can try out the Speech-to-Text service by either uploading an audio file (File Transcription) in any format or using the Dictation tool 🗣️ to get live transcriptions.
🌐 The supported languages are:
Hindi, Odia, Chinese, Japanese, Kazakh, Russian, Filipino (Tagalog), Vietnamese, Arabic, Farsi (Persian), Czech, Dutch, English, Esperanto, French, German, Greek, Italian, Portuguese, Spanish, Swedish, Turkish & Ukrainian.
We will add many more languages over the coming weeks. Please get in touch with us if you have any preferences!
If you haven’t yet, sign up on the NeuralSpace Platform and try it out for yourself! Get started with $200 worth of credits.
Join the NeuralSpace Slack Community to connect with us, receive updates, and discuss NLP for low-resource languages with fellow developers and researchers.
Check out our Documentation to read more about the NeuralSpace Platform and its different services.