Ensure compatibility with multiple frameworks, including .NET 6.0, .NET Framework 4.6.2, and .NET Standard 2.0 and above. Minimize dependencies to avoid version conflicts and the need for binding redirects.

Transcribing Audio Files

One of the core capabilities of the SDK is audio transcription. Developers can transcribe audio files asynchronously or in real time. Below is an example of how to transcribe an audio file:

using AssemblyAI;
using AssemblyAI.Transcripts;

var client = new AssemblyAIClient("YOUR_API_KEY");

var transcript = await client.Transcripts.TranscribeAsync(new TranscriptParams
{
    AudioUrl = "https://storage.googleapis.com/aai-docs-samples/nbc.mp3"
});

transcript.EnsureStatusCompleted();
Console.WriteLine(transcript.Text);

For local files, similar code can be used to achieve transcription:

await using var stream = new FileStream("./nbc.mp3", FileMode.Open);

var transcript = await client.Transcripts.TranscribeAsync(
    stream,
    new TranscriptOptionalParams
    {
        LanguageCode = TranscriptLanguageCode.EnUs
    }
);

transcript.EnsureStatusCompleted();
Console.WriteLine(transcript.Text);

Real-Time Audio Transcription

The SDK also supports real-time audio transcription using Streaming Speech-to-Text. This feature is especially useful for applications that require immediate processing of audio data.

using AssemblyAI.Realtime;

await using var transcriber = new RealtimeTranscriber(new RealtimeTranscriberOptions
{
    ApiKey = "YOUR_API_KEY",
    SampleRate = 16_000
});

transcriber.PartialTranscriptReceived.Subscribe(transcript =>
    Console.WriteLine($"Partial: {transcript.Text}")
);

transcriber.FinalTranscriptReceived.Subscribe(transcript =>
    Console.WriteLine($"Final: {transcript.Text}")
);

await transcriber.ConnectAsync();

// Pseudocode for obtaining audio from a microphone, for example
GetAudio(async (chunk) => await transcriber.SendAudioAsync(chunk));

await transcriber.CloseAsync();

Using LeMUR for LLM Applications

The SDK integrates with LeMUR, allowing developers to build large language model (LLM) applications on voice data. Here is an example:

var lemurTaskParams = new LemurTaskParams
{
    Prompt = "Provide a brief summary of the transcript.",
    TranscriptIds = [transcript.Id],
    FinalModel = LemurModel.AnthropicClaude3_5_Sonnet
};

var response = await client.Lemur.TaskAsync(lemurTaskParams);

Console.WriteLine(response.Response);

Audio Intelligence Models

Additionally, the SDK includes built-in support for audio intelligence models, enabling sentiment analysis and other advanced features.

var transcript = await client.Transcripts.TranscribeAsync(new TranscriptParams
{
    AudioUrl = "https://storage.googleapis.com/aai-docs-samples/nbc.mp3",
    SentimentAnalysis = true
});

foreach (var result in transcript.SentimentAnalysisResults!)
{
    Console.WriteLine(result.Text);
    Console.WriteLine(result.Sentiment); // POSITIVE, NEUTRAL, or NEGATIVE
    Console.WriteLine(result.Confidence);
    Console.WriteLine($"Timestamp: {result.Start} - {result.End}");
}

To learn more, visit the official AssemblyAI blog.

Image source: Shutterstock
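A closing note on the streaming example above: the article leaves microphone capture as pseudocode (GetAudio). Below is a minimal sketch of that step, assuming the third-party NAudio library as the audio source (it is not part of the AssemblyAI SDK) and the SendAudioAsync method shown earlier; treat it as one possible implementation rather than the SDK's prescribed approach.

using AssemblyAI.Realtime;
using NAudio.Wave; // assumed third-party dependency for microphone capture

await using var transcriber = new RealtimeTranscriber(new RealtimeTranscriberOptions
{
    ApiKey = "YOUR_API_KEY",
    SampleRate = 16_000
});
await transcriber.ConnectAsync();

// Capture 16 kHz, 16-bit mono PCM from the default microphone.
using var waveIn = new WaveInEvent
{
    WaveFormat = new WaveFormat(16_000, 16, 1),
    BufferMilliseconds = 100
};

waveIn.DataAvailable += async (_, args) =>
{
    // Copy only the valid portion of the buffer and forward it to the transcriber.
    var chunk = new byte[args.BytesRecorded];
    Array.Copy(args.Buffer, chunk, args.BytesRecorded);
    await transcriber.SendAudioAsync(chunk);
};

waveIn.StartRecording();
Console.WriteLine("Recording... press any key to stop.");
Console.ReadKey();
waveIn.StopRecording();

await transcriber.CloseAsync();

NAudio is only one option here; any source that yields raw 16-bit PCM at the configured sample rate can be forwarded to the transcriber in the same way.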