©Power Trio LLC. All Rights Reserved.
Patrick says:
One of the hassles I’ve had with Logic is that it will silently mute some tracks if it thinks there’s no audio to process. For the Music IO plug-in, the track it’s on will need some audio input, and Logic will need to believe that the track is active. If you’re not using an FX loop, one of the most reliable tricks is to put an audio loop (it can be anything) onto the track.
When you record, the first track (with the Music IO plug-in) would play back — but instead of the audio from the recording, you would get the audio coming in from the iPad (the plug-in replaces the audio from the track with audio from USB).
To give an analogy: in some respects, an audio track in Logic is like a conveyor belt. The audio from an interface gets put onto the belt, and is then moved along past each effects app, until the final version comes out at the bottom of the track. If Logic thinks that there’s no new input coming into the track, it stops the conveyor belt completely; none of the effects apps (in this case, Music IO) get a chance to do anything.
It makes some sense why Logic does this: if there’s no audio coming in, it can save CPU time by not doing anything. Of the DAWs I’ve seen, Logic is the most aggressive about this (and gives no indication that it’s going on). musicIO is positioned in an unusual way, and my guess is that this is what’s happening to you.
A simple thing to try would be to disable Music IO on the track. If the input to the track does not bounce to the second track (where you record), then the problem is likely that Logic does not think that the track is active. You can check your audio inputs, perhaps select the track before recording, and so on. If it does record (and toggling Music IO off and then back on does not solve the problem), then we may have something trickier.
For Version 1.50 (Feb 2016): Runs on iOS 7 or better, 10.7+ on OS X, and Windows Vista/7/8/10
iOS
OSX
WINDOWS
Videos pertaining to musicIO; check back, as we’ll update this page periodically…
Version 1.50
3 Tutorials on using Ableton and musicIO
Version 1.30
Version 1.30 FX Loops with Ableton Live (08-May-2015)
Version 1.20
Version 1.20 Demonstration (19-Apr-2015)
Audio Update Promo Video (27-Mar-2015)
The Sound Test Room’s MIDI overview (05-Mar-2015)
Robin Fitton’s Music IO iPad Overview (04-Aug-2015)
What is the latency for audio? Jitter? Glitches?
Audio latency depends on a number of factors; this article will go into a bit of technical detail, to give the best understanding of the abilities and limitations of the system. With a bit of careful planning, it should be possible to achieve excellent results with the Music IO app.
The iOS Audio System
Audio on an iOS device is processed in individual slices of time. One could think of this as a conveyor belt, where a fraction of a second of sound is moved from one point to another. Audio levels are measured at 44,100 samples per second; this is fast enough to capture the full range of human hearing (the sample rate is roughly twice the highest frequency that most people can hear). These 44,100 samples per second are then broken into a number of smaller buffers; 256 samples is a common buffer size, and corresponds to about 5.8 milliseconds of audio.
With the iOS audio system, there is typically a single buffer being recorded, a buffer of audio being generated by a synth app, and a buffer of audio being played out of the speaker or headphones. When one 5.8 millisecond period ends, the recorded audio moves to a synthesizer app for processing, the audio processed by the synthesizer becomes available for playback through the speaker, and a new 5.8 millisecond segment of audio begins to be recorded.
When all of the components finish their work in time, the audio system works like clockwork. One buffer is being recorded, a second is being processed, and a third is being played. If one of the buffers is not ready in time (this is particularly likely to occur if the synthesizer app is doing a complex computation), the audio may not be ready in time to play back, and you can hear an audio glitch.
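This record/process/play overlap can be sketched as a tiny simulation. The stage names and structure below are ours, purely for illustration: a buffer recorded during one period reaches the speaker two periods later, so the pipeline as a whole spans three buffer periods.

```python
from collections import deque

# Three-stage audio pipeline: record -> process -> play.
# Each tick represents one buffer period (~5.8 ms at 256 samples / 44.1 kHz).
BUFFER_MS = 256 / 44100 * 1000

pipeline = deque([None, None, None], maxlen=3)  # record, process, play slots

def tick(new_buffer):
    """Advance the conveyor by one buffer period; return what is played."""
    pipeline.appendleft(new_buffer)      # new audio enters the record slot
    return pipeline[-1]                  # whatever has reached the play slot

played = [tick(f"buffer {i}") for i in range(4)]
print(played)                            # [None, None, 'buffer 0', 'buffer 1']
print(f"pipeline span: {3 * BUFFER_MS:.1f} ms")  # three buffer periods
```

The first two ticks play nothing, because no buffer has travelled the full length of the belt yet; from then on, one buffer emerges per period.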
The size of the buffer is an important parameter. 256 samples corresponds to about 5.8ms, 512 samples is about 11.6ms, and 128 samples is about 2.9ms. Smaller buffers can make the audio system more responsive — there’s less time to wait between an audio buffer being processed and it becoming available to play out through the speaker. Smaller buffers also increase the computing load, though — finding the best buffer size is a trade-off between the complexity of the processing, the CPU speed of the iOS device, and the need for low latency.
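The arithmetic behind these figures is simply the buffer size divided by the sample rate. A quick sketch:

```python
# Duration of one audio buffer at a 44,100 Hz sample rate.
SAMPLE_RATE = 44100  # samples per second

def buffer_ms(buffer_samples, sample_rate=SAMPLE_RATE):
    """Length of one buffer in milliseconds."""
    return buffer_samples / sample_rate * 1000

for size in (128, 256, 512):
    print(f"{size:4d} samples -> {buffer_ms(size):.1f} ms")
# 128 samples ->  2.9 ms
# 256 samples ->  5.8 ms
# 512 samples -> 11.6 ms
```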
The initial release of audio for Music IO uses a default buffer size of 256 samples. This is large enough to have good performance on most recent iOS hardware, while also being small enough for the system to be responsive.
Transfer of Audio across USB
Music IO uses Inter-App Audio directly, to get access to audio buffers as soon as the iOS operating system allows. The buffer size will impact this latency, but in practice, there will be at least a 5.8ms delay. Once audio is captured by the iOS version of Music IO, it is packaged and sent across the USB connection to the Mac host immediately.
Depending on the amount of other data being sent across the USB connection, the audio buffer may be delayed slightly (and there is some required processing to enable delivery of the audio buffer to the Mac OS X version of Music IO).
Because there can be slight variations in the time that an audio buffer is delivered, Music IO on the Mac performs a small amount of buffering; this is typically one or two buffers’ worth of data, or between 5.8ms and 11.6ms.
The OS X Audio System
As with the iOS audio system, the OS X system also works with buffering. We recommend using the same buffer sizes throughout the audio chain; the Mac version of Music IO defaults to buffers of 256 samples.
The Mac version of Music IO can direct the audio to any audio output destination (including aggregate devices). This can add another 5.8ms of latency.
An important issue to note: while both the iOS and OS X audio systems can be viewed as conveyor belts, moving buffers of audio from input to output, the two systems may not be perfectly aligned. The alignment may impact the effective latency.
Connecting iOS Audio to an OS X DAW
On OS X, Apple recommends using kernel audio drivers to send audio from one user application to another. A kernel driver can be integrated directly with the operating system, providing lower latency audio, and lower computing demands — systems that rely on user-space software may not have access to real time threads, and can thus have higher latency.
There are a number of kernel drivers available that will enable audio to be sent into a desktop DAW. One system which we recommend is SoundFlower, a free open-source driver developed by the team at Cycling ’74. When installed, SoundFlower provides a virtual audio interface; Music IO can send audio from the iOS device to the SoundFlower port, and any desktop DAW can receive the audio from that port.
In many respects, SoundFlower operates in the same manner as Inter-App Audio on iOS; it is a conduit to transfer audio from one place to another. There are also commercial applications that achieve the same sort of behavior (for example Sound Siphon).
In our experiments, the end-to-end latency of Music IO is in the range of 20 to 30ms in most cases. This includes the following steps.
MIDI notes sent from Logic X on the Mac
MIDI transferred to an iPad using Music IO
MIDI delivered to an iOS app (BS-16i in our experiments, with a piano patch)
Audio captured from BS-16i by Music IO
Audio transferred back from iOS to the Mac
Music IO on the Mac delivers the audio to SoundFlower
Logic X on the Mac records the audio from SoundFlower onto an audio track
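The end-to-end figure can be understood as the sum of the individual stages above. The sketch below adds up one plausible per-stage budget; the stage names and values are illustrative assumptions (one 256-sample buffer is about 5.8 ms), not measurements:

```python
# Illustrative end-to-end latency budget for the Music IO round trip.
# Per-stage values are assumptions for this sketch, not measured figures.
BUFFER_MS = 256 / 44100 * 1000  # one 256-sample buffer, ~5.8 ms

stages = {
    "iOS Inter-App Audio capture": BUFFER_MS,
    "USB transfer to the Mac": 1.0,           # assumed transfer cost
    "Mac-side jitter buffering": BUFFER_MS,   # one to two buffers in practice
    "SoundFlower / output routing": BUFFER_MS,
    "DAW input buffering": BUFFER_MS,
}

total = sum(stages.values())
for name, ms in stages.items():
    print(f"{name:30s} {ms:5.1f} ms")
print(f"{'total':30s} {total:5.1f} ms")  # lands in the 20-30 ms range
```

With these assumed numbers the total comes out around 24 ms, consistent with the 20 to 30 ms range reported above; larger buffer sizes or a second jitter buffer push the figure toward the top of that range.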
We have worked primarily with 256 sample buffers; using smaller buffers reduces the effective latency. The default of 256 provides good performance for most users in most situations; we will add configuration options in an upcoming release, to allow experienced users to customize their setup.
Sending OS X Audio to iOS
Audio from OS X to iOS is not supported in v1.1 of Music IO. We will add this feature in an upcoming release.
Direct Monitoring
In any situation, it should be possible to monitor audio directly from the iOS device (rather than through the DAW). Experienced musicians are likely familiar with “direct monitoring.”
Latency Compensation with a DAW
To achieve nearly perfect timing of recorded audio, use latency compensation at the DAW. Most DAWs provide a simple method to shift recordings a few milliseconds backwards or forwards in time, to offset for any latency in the recording process.
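To apply that compensation, the measured latency is converted into a sample offset by which the DAW shifts the recorded region earlier. A minimal sketch (the 24 ms figure is just an example measurement):

```python
# Convert a measured round-trip latency into the sample offset a DAW
# would use to shift a recorded region backwards in time.
SAMPLE_RATE = 44100

def latency_to_samples(latency_ms, sample_rate=SAMPLE_RATE):
    """Number of samples to shift a recording earlier."""
    return round(latency_ms / 1000 * sample_rate)

print(latency_to_samples(24))  # 1058 samples for a 24 ms latency
```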
Across a wide range of Mac systems, and with a number of different iOS devices, latency is typically just under 1ms for a note-on or note-off message to be delivered. We have not been able to measure any significant jitter.
This performance does depend on the workload on both the iOS device and the OS X host, as well as the traffic flowing across the USB connection. If the Mac host is involved in a compute-intensive task (indexing mailboxes, for example), or the USB connection is performing a sync to iTunes, latency and jitter will obviously be higher.
In typical use cases, however, we expect that both latency and jitter will be small enough that they will be difficult to detect, and will not impact music making negatively.
First off, make sure you are running the latest versions of both the Mac and the iOS client.
If the Mac app is still crashing, we’ll need your crash logs. These logs are generated when the App stops abruptly, and you’ll see a prompt like this:
Please email support@musicioapp.com and include the crash report so we can figure out what is going on.
No. Music IO will faithfully translate any clock signals passing through it, although it has special ‘smarts’ for accurately matching the Mac end with any app on iOS sending using the ‘MidiBus’ clock (see http://midib.us for apps) as master.
As of writing, you can monitor any/all iOS audio sources via the plugin architecture by inserting musicIO plugins into your DAW and monitoring there.
For direct audio (via SoundFlower), only one iOS audio source can be monitored at any one time.
Message from user
I am writing to get help from someone who is versed in Logic Pro X with respect to Music IO. I am able to get MIDI in and out of my Mac and iPad, as well as record MIDI in a MIDI track, but no matter what I have tried, I cannot record audio. I can see audio when I select “input monitoring”, but when I arm record, I see nothing. I will say that I am a real noob when it comes to Logic Pro X, but I have taken a few tutorial courses about the basics.
Here is my setup:
I have a 2008 Mac Pro running El Capitan
I have an Apogee Duet plugged into USB and sending audio to an external speaker system.
I have a Novation Launchkey keyboard plugged into USB which is able to play synths on my iPad and Mac.
I have an iPad Air 2 plugged into USB and running various synth apps like Moog Model 15, Animoog and others.
I did watch one of the few YT videos titled “Music IO 1.20 — MIDI and Audio over USB”; however, the video did not cover ALL the steps for setting up the tracks in Logic as it pertains to Music IO. If I connect an audio cable from the iPad headphone out to the Apogee Duet line in, I am able to record that audio, but I thought that is what Music IO is supposed to do for me?
Any help would be greatly appreciated.
In Absolute Gratitude
Message from Patrick
I’m jumping in on this, as I’m the resident Logic guy. Hopefully, we can get you sorted out shortly (I’m out of town starting tomorrow, with limited email access!).
The key things in Logic:
1) Install the AU plug-in. This should have been part of the DMG disk image that you grabbed when getting the musicIO server. All you should need to do is drag the plug-in into the folder (and there are links in the DMG itself; it should be a breeze).
2) In Logic, create an audio track. There’s a section for inserting effects, and you should see the musicIO plug-in as one of your options. Select that, and you should see a little pop-up window for configuring things. The app can stream four tracks of audio from the iPad into Logic…
3) On the iPad — in musicIO, there’s a tab where you can insert IAA audio generators (synths); drop one in there, and audio from that should stream across USB, through the musicIO server, and into the AU plug-in.
4) Tricky part #1. Logic normally captures audio at the “input” of the track — for example, a raw, unprocessed guitar signal. Everything that happens after that (amp simulators, for example) is *not* recorded. This throws a bit of a monkey wrench into the mix, as the musicIO plug-in is sitting in the effects slot. So…..
4a) Create a second audio track. For the input of that track, use Bus 1.
4b) On the first track, use a “send” (at the bottom of the track) to route the output to Bus 1. There’s a little green volume dial that you’ll need to adjust, to bring the level up.
What’s going to happen is that the audio will come from the iPad, out through the musicIO plug-in on the first track (at the effects stage of the processing), and then get routed into the second audio track input, over bus 1. Because the audio is coming into the input stage of the second track, you can record it there.
A bit tricky — but once you’ve done it, it should start to make sense. The bus routing is a standard trick in modern DAWs.
5) Tricky part #2. Logic will shut down audio processing on anything that it thinks is silent. So, on the first track, make sure you have an audio input (either an external audio source or a recorded loop). If Logic thinks that there’s no sound coming into the input, it stops all processing on the entire track, and any audio that would come from the plug-in gets ignored. The plug-in itself will give you an “audio muted” warning if it detects that happening.
—-
Cool so far? Once you’ve got that, you can add three other channels and synths if you like. Each iOS device can host four separate channels (and each channel can have four audio generators and four effects apps — probably much more than your iPad could actually handle).
Anyway, hopefully this will get you rolling! Let me know if you have trouble.
Best regards,
Patrick
Response from User
Hi Patrick,
Thank you for your fantastic help. Because of your instructions, I was able to get it working!
I did notice that when I set it up as you suggested, in order to record anything, I had to turn on “input monitoring” on the first audio track, or else nothing showed up in the 2nd track. I also had to throw a loop in the first track to keep the Music IO plugin from being muted by LPX.
In Absolute Gratitude
There are a few common issues when setting up musicIO. The musicIO plug-in is a version-2 VST, and not all DAWs support this standard (Reason, in particular, does not). Older DAWs may also lack support for it.
Check to make sure that you have placed the VST in the correct folder for your DAW. You may also try switching to the 32 or 64-bit VSTs; we have heard some reports of trouble with Cubase and the 64-bit plug-in, where the 32-bit plug was able to load and operate without problems.
Also be sure that you have installed LoopMIDI; the current VST plug-in will not load correctly if LoopMIDI has not been installed.
There are a few possible causes for this problem. First and foremost, make sure that you have a current version of iTunes installed; the musicIO plug-in uses some system libraries that come with iTunes. You do not need to be running iTunes while using musicIO — but if iTunes can’t detect your iOS device, it’s unlikely that musicIO will be able to connect successfully.
If iTunes is installed, but the plug-in is not able to connect, please try a few different USB ports, and avoid the use of a USB hub. In one case, a connection problem was solved by switching to an Apple-branded USB sync cable. musicIO requires a high-speed data connection, and the USB port in use (and any intermediate hubs) can be a source of trouble.
Finally, if you have a network firewall enabled, check to make sure that your DAW is allowed network access. The connection from the VST plug-in to the iOS device is considered a “network” connection by Windows — and a firewall may block this connection.
If you are new to musicIO, and need to set up a PC, there are a few steps to follow.
iTunes for Windows is required for musicIO to operate; there are a few system utilities that are part of the iTunes installation package. Note that iTunes does not need to be running to use musicIO; it simply needs to be installed.
LoopMIDI is a free utility application that allows programs on Windows to communicate using virtual MIDI ports. musicIO uses an SDK that was developed by the LoopMIDI authors, and the LoopMIDI installer adds a few required system files for musicIO to operate. You do not need to be actively running LoopMIDI for musicIO to operate — but it does need to be installed.
The installation location for the VST will vary with your DAW. More details for a variety of DAWs are given below.
Installation details for Reaper
Ableton Live
Typical problems in getting musicIO running are covered here.
To configure musicIO for Reaper, you will need to first copy the VST into the plug-in folder. You can check for your default plug-in locations by opening Preferences, and then moving to the Plug-In/VST tab. Copy the plug-in to the indicated directory (or add a new directory location if you like). Press “re-scan,” or restart Reaper.
After you have installed the VST, you can add it to an effects slot on a track. If you have musicIO running on your iOS device, and connect the USB cable, the VST plug-in will search for a moment, and then should indicate a successful connection. If you do not get a connection shortly after connecting the cable, check the PC troubleshooting section.
Audio on a track on your iOS device will stream across the USB cable, and effectively be the “output” of the VST effect plug-in. If you wish to process audio on iOS (sending from your Reaper track, through an effects app, and then back to your PC), toggle on the FX Loop button.
Version 1.50 of musicIO uses a virtual MIDI SDK that has been provided by the authors of LoopMIDI. If you have LoopMIDI installed, that’s all you need to do; the VST will create virtual MIDI ports automatically. To use them in Reaper, simply go to the MIDI Devices tab under preferences, and toggle on musicIO MIDI (input and/or output, depending on your needs).
Because the virtual MIDI ports are created after the VST starts, you may need to refresh your MIDI devices to see them appear.
Problem: MusicIO32.dll causes Sound Forge 10 to hang on its VST scan.
Solution: Remove it from the VST folder in order to use Sound Forge 10.
This is straight from a support ticket with a user:
Two solutions for Sonar (very different from Ableton and Logic):
Hardware: output the track with the plug-in to a free output pair on the soundcard, then come back into another track with this free pair to record.
Software: “The plugin requires two audio tracks. The first track physically hosts the plugin in the effects section. Because the audio is entering the signal path at the insert stage, not the track input stage, you can’t record directly to the track where the plugin resides, as there is no input to record. This is where the second track comes in. You need to route the output of the first track to the input of the second track, then record on track two. Sonar can’t do this, as it doesn’t allow for tracks to output to other tracks, only buses. The workaround is to use the ‘Bounce to Tracks’ feature. Solo the track with the plugin instantiated, then bounce with the Main Outputs or Entire Mix set as your source and the second track you created as the destination.” Thanks to Reactorstudios on Sonar’s forum.
To listen to a result of this cooking: https://soundcloud.com/kitusai/bawu
PDF from user about Sonar with 1.41 of musicIO: Sonar Platinum and musicIO
User Scott Harris reports:
I “wrapped” the musicIO 32-bit plugin using jbridge, which is a program that allows 32-bit VST plugins to work with 64-bit DAWs (Cubase etc.)… What it does is wrap the 32-bit musicIO plugin and tell the DAW that it is 64-bit and SDK 2.4. So it in effect converts your 32-bit plugin into a 64-bit plugin, and the wrapper reports the SDK as version 2.4… It is working. I want to be clear: the 32-bit or 64-bit version of your plugin from any of your archives will not work in Cubase 8 and above. The only way to get it to work is to wrap the 32-bit plugin with jbridge, and then it will scan and be available to Cubase 8x 64-bit. By way of background, Cubase has a 32-bit bridge built in, but it doesn’t work with your plugin (or many others), because it doesn’t update the SDK… jbridge can do that.
I successfully tested this workaround, and I am streaming audio and sending MIDI clock to musicIO. iOS apps such as Animoog, Electribe, Caustic and others are working.
This is a workaround for Cubase 8 and above (64-bit only) users, provided they can wrap the plugin using jbridge. There is no other way to get it to scan. In short: the musicIO 64-bit or 32-bit plug-in is not seen by Cubase 8x and above.
Workaround: bridge the 32-bit version using jbridge, and it will be available to Cubase 8 and above (64-bit only). Here is a link to jbridge: https://jstuff.wordpress.com/jbridge/
A user writes
I can’t seem to get Music IO to start DM1; it gives me a load fail error. However, if I start DM1 first and then try to open DM1 from Music IO, it works.
For Version 1.50 (Feb 2016): Runs on iOS 7 or better, 10.7+ on OS X, and Windows Vista/7/8/10
iOS
OSX
WINDOWS
Videos pertaining to musicIO, check back as we’ll update this page periodically…
Version 1.50
3 Tutorials on using Ableton and musicIO
Version 1.30
Version 1.30 FX Loops with Ableton Live (08-May-2015)
Version 1.20
Version 1.20 Demonstration (19-Apr-2015)
Audio Update Promo Video (27-Mar-2015)
The Sound Test Room’s MIDI overview (05-Mar-2015)
Robin Fitton’s Music IO iPad Overview (04-Aug-2015)
What is the latency for audio? Jitter? Glitches?
Audio latency depends on a number of factors; this article will go into a bit of technical detail, to give the best understanding of the abilities and limitations of the system. With a bit of careful planning, it should be possible to achieve excellent results with the Music IO app.
The iOS Audio System
Audio on an iOS device is processed in individual slices of time. One could think of this as a conveyer belt, where a fraction of a second of sound is moved from one point to another. Audio levels are measured at 44,100 samples per second; this fast enough to capture audio frequencies that are well above the range of human hearing (in fact, roughly twice the frequency that most people can hear). These 44,100 samples per second are then broken into a number of smaller buffers; 256 samples is a common buffer size, and corresponds to about 5.6 milliseconds of audio.
With the iOS audio system, there is typically a single buffer being recorded, a buffer of audio being generated by a synth app, and a buffer of audio being played out of the speaker or headphones. When one 5.6 millisecond period ends, the recorded audio moves to a synthesizer app for processing, the audio processed by the synthesizer moves to the being available for play from the speaker, and a new 5.6 segment of audio begins to be recorded.
When all of the components finish their work in time, the audio system works like clockwork. One buffer at record, a second one being processed, and a third being played. If one of the buffers is not ready in time (this is particularly likely to occur if the synthesizer app is doing a complex computation), the audio may not be ready in time to play back, and you can hear an audio glitch.
The size of the buffer is an important parameter. 256 samples corresponds to 5.6ms, 512 samples is about 11ms, and 128 samples is 2.8ms. Smaller buffers can make the audio system more responsive — there’s less time to wait between an audio buffer being processed, and it becoming available to play out through the speaker. Smaller buffers also increase the computing load, though — finding the best buffer size is a trade-off between the complexity of the processing, the CPU speed of the iOS device, and the need for low latency.
The initial release of audio for Music IO uses a default buffer size of 256 samples. This is large enough to have good performance on most recent iOS hardware, while also being small enough for the system to be responsive.
Transfer of Audio across USB
Music IO uses Inter-App Audio directly, to get access to audio buffers as soon as the iOS operating system allows. The buffer size will impact this latency, but in practice there will be at least a 5.8ms delay. Once audio is captured by the iOS version of Music IO, it is packaged and sent across the USB connection to the Mac host immediately.
Depending on the amount of other data being sent across the USB connection, the audio buffer may be delayed slightly (and there is some required processing to enable delivery of the audio buffer to the Mac OS X version of Music IO).
Because there can be slight variations in the time that an audio buffer is delivered, Music IO on the Mac performs a small amount of buffering; this is typically one or two buffers worth of data, or between 5.8ms and 11.6ms.
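This receive-side buffering can be sketched as a tiny jitter buffer: hold a couple of buffers before starting playback, then substitute silence on underrun rather than stalling the output. The class and method names here are illustrative, not Music IO's actual implementation:

```python
from collections import deque

class JitterBuffer:
    """Sketch of a receive-side jitter buffer: absorb variation in USB
    delivery time by holding a small queue of audio buffers.
    Illustrative only, not Music IO's implementation."""

    def __init__(self, target_depth=2):
        self.queue = deque()
        self.target_depth = target_depth
        self.primed = False

    def push(self, audio_buffer):
        # Buffers arriving from USB land here.
        self.queue.append(audio_buffer)
        if len(self.queue) >= self.target_depth:
            self.primed = True

    def pop(self, silence):
        # Called once per output period. Until the target depth is
        # reached, or on underrun, play silence instead of stalling.
        if not self.primed or not self.queue:
            return silence
        return self.queue.popleft()
```

With a target depth of two 256-sample buffers, this adds the 5.8 to 11.6ms described above in exchange for glitch-free playback despite delivery jitter.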
The OS X Audio System
As with the iOS audio system, the OS X system also works with buffering. We recommend using the same buffer sizes throughout the audio chain; the Mac version of Music IO defaults to buffers of 256 samples.
The Mac version of Music IO can direct the audio to any audio output destination (including aggregate devices). This can add another 5.8ms of latency.
An important issue to note — while both the iOS and OS X audio systems can be viewed as conveyer belts, moving buffers of audio from input to output, the two systems may not be perfectly aligned. The alignment may impact the effective latency.
Connecting iOS Audio to an OS X DAW
On OS X, Apple recommends using kernel audio drivers to send audio from one user application to another. A kernel driver can be integrated directly with the operating system, providing lower-latency audio and lower computing demands; systems that rely on user-space software may not have access to real-time threads, and can thus have higher latency.
There are a number of kernel drivers available that will enable audio to be sent into a desktop DAW. One system which we recommend is SoundFlower, a free open-source driver developed by the team at Cycling ’74. When installed, SoundFlower provides a virtual audio interface; Music IO can send audio from the iOS device to the SoundFlower port, and any desktop DAW can receive the audio from that port.
In many respects, SoundFlower operates in the same manner as Inter-App Audio on iOS; it is a conduit to transfer audio from one place to another. There are also commercial applications that achieve the same sort of behavior (for example Sound Siphon).
In our experiments, the end-to-end latency of Music IO is in the range of 20 to 30ms in most cases. This round trip includes the following steps:
MIDI notes sent from Logic X on the Mac
MIDI transferred to an iPad using Music IO
MIDI delivered to an iOS app (BS-16i in our experiments, with a piano patch)
Audio captured from BS-16i by Music IO
Audio transferred back from iOS to the Mac
Music IO on the Mac delivers the audio to SoundFlower
Logic X on the Mac records the audio from SoundFlower onto an audio track
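A back-of-the-envelope tally of those steps shows how the total lands in the measured range. The per-stage figures below are illustrative estimates (assuming 256-sample buffers), not measurements:

```python
# Rough per-stage latency estimates for the round trip described above.
# These numbers are illustrative, not measured values.
stages_ms = {
    "MIDI from Logic X to iPad over USB":            1.0,
    "iOS app renders audio (one 256-sample buffer)": 5.8,
    "IAA capture by Music IO":                       5.8,
    "USB transfer from iPad to Mac":                 1.0,
    "Mac-side jitter buffering (1-2 buffers)":       8.7,
    "SoundFlower into Logic (one buffer)":           5.8,
}
total = sum(stages_ms.values())
print(f"estimated round trip: {total:.1f} ms")
```

The estimate falls inside the 20 to 30ms window we measured; the exact figure depends on buffer sizes and on how the two conveyor belts happen to align.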
We have worked primarily with 256 sample buffers; using smaller buffers reduces the effective latency. The default of 256 provides good performance for most users in most situations; we will add configuration options in an upcoming release, to allow experienced users to customize their setup.
Sending OS X Audio to iOS
Audio from OS X to iOS is not supported in v1.1 of Music IO. We will add this feature in an upcoming release.
Direct Monitoring
In any setup, it should be possible to monitor audio directly from the iOS device (rather than through the DAW); experienced musicians will recognize this as "direct monitoring."
Latency Compensation with a DAW
To achieve nearly perfect timing of recorded audio, use latency compensation at the DAW. Most DAWs provide a simple method to shift recordings a few milliseconds backwards or forwards in time, to offset for any latency in the recording process.
Music IO is designed to work with Inter-App Audio; it does not have direct support for Audiobus. Note that recently released “Audiobus 2.0” apps are usually compatible with both Audiobus and Inter-App Audio; Audiobus 2.0 uses IAA as the underlying connection structure. Thus, in many cases, an Audiobus-compatible app can be used with Music IO (although the Audiobus connection panel will not be available).
Synthesizer and effect apps that support IAA can be added directly in the Music IO iOS interface; at present, up to four sound generation apps, and one effect app, can run simultaneously. The sound from these apps is mixed together, and sent across the USB connection to the Mac OS X host.
Future updates will increase the number of separate audio tracks available, the number of effects apps that can be run in series on each track, and will also have support for multiple separate audio channels. The ultimate limit on the number of apps that can run at any given time will be determined by the type of processor on the iOS device (newer iOS devices are faster), and the computing demands of each individual app (there is wide variation in this).
The Music IO team opted to use Inter-App Audio, as it simplifies configuration, avoiding the need for a second app to configure audio routing. By minimizing the number of apps needed to provide both audio and MIDI connections, we can reduce the compute and memory resources required for this task — freeing up more processing power for sound generation.
Direct support for Inter-App Audio also enables the use of high resolution audio formats. Audio with Audiobus is limited to 16-bit samples; Music IO uses 32-bit floating point samples throughout the signal chain (some apps may use lower resolution samples; upscaling happens automatically).
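The 16-bit-to-float conversion mentioned here is a simple scaling; a sketch (the helper name is ours for illustration):

```python
def int16_to_float32(samples):
    # Scale 16-bit integers into the float range [-1.0, 1.0);
    # 32768 is the magnitude of the most negative 16-bit value.
    return [s / 32768.0 for s in samples]

print(int16_to_float32([-32768, 0, 16384]))  # [-1.0, 0.0, 0.5]
```

Because every 16-bit value maps exactly onto a 32-bit float, the upscaling is lossless; the benefit of the wider format shows up when mixing and processing, where intermediate results would otherwise clip or lose precision.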
Across a wide range of Mac systems, and with a number of different iOS devices, latency is typically just under 1ms for a note-on or note-off message to be delivered. We have not been able to measure any significant jitter.
This performance does depend on the work load on both the iOS device and OS X host, as well as the traffic flowing across the USB connection. If the Mac host is involved in a compute intensive task (indexing mail boxes, for example), or the USB connection is performing a sync to iTunes, latency and jitter will obviously be higher.
In typical use cases, however, we expect that both latency and jitter will be small enough that they will be difficult to detect, and will not impact music making negatively.
This can happen for two reasons.
1. Too Many Instruments
In version 1.10 of Music IO for iOS, if you attempt to load more than four instruments you’ll see this message.
2. True Load Failure
Sometimes, IAA starts an instrument or effect and it’s running in the background, invisible in the App Switcher. This seems to happen mostly when an IAA app is already running when Music IO is started.
If you can see the instrument or effect in the App Switcher, just terminate it.
If you cannot see it in the App Switcher, start the instrument or effect yourself manually and then terminate it.
If this does not cure the problem, a Restart of your iOS device should definitely do the trick.
No. Music IO will faithfully translate any clock signals passing through it, although it has special 'smarts' for accurately syncing the Mac end to any iOS app acting as clock master via 'MidiBus' (see http://midib.us for apps).
Symptom
musicIO crashes when user tries to open an audio instrument or effect on iPhone 6S.
Devices Affected
iPhone 6S, iPhone 6S+
Explanation
Apple made some changes to audio on the iPhone 6S and 6S+.
The internal speaker on the iPhone 6S models supports only a 48kHz sample rate, while previous iPhone models supported a range of sample rates.
All prior iOS devices support 44.1kHz processing; musicIO and many music apps have been optimized for this. musicIO currently lacks support for 48kHz audio, preventing the app from operating correctly on 6S hardware. We plan to adapt our app to this hardware shortly.
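Bridging a 44.1kHz app to 48kHz hardware requires sample-rate conversion. Here is a naive linear-interpolation sketch of the idea (real converters use proper filtering to avoid aliasing; this is not Music IO's implementation):

```python
def resample_linear(samples, src_rate, dst_rate):
    """Naive linear-interpolation resampler: a sketch of the kind of
    rate conversion needed to bridge 44.1kHz audio to 48kHz hardware.
    Illustrative only; production converters filter to avoid aliasing."""
    n_out = int(len(samples) * dst_rate / src_rate)
    out = []
    for i in range(n_out):
        pos = i * src_rate / dst_rate   # position in the source signal
        j = int(pos)
        frac = pos - j
        a = samples[j]
        b = samples[j + 1] if j + 1 < len(samples) else samples[j]
        out.append(a + (b - a) * frac)  # interpolate between neighbors
    return out
```

Upsampling from 44,100 to 48,000 samples per second produces roughly 279 output samples for every 256 input samples.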
Solution (Updated 2016-07-15)
This problem does not occur if the iPhone has headphones (or similar) connected to the headphone jack. Plugging in headphones will switch the sample rate and allow musicIO to work.
Thanks for your patience,
Nic, Patrick and Dan
Message from user
I am writing to get help from someone who is versed in Logic Pro X with respect to Music IO. I am able to get MIDI in and out of my Mac and iPad as well as record MIDI in a MIDI track, but no matter what I have tried, I cannot record audio. I can see audio when I select "input monitoring", but when I arm record, I see nothing. I will say that I am a real noob when it comes to Logic Pro X but have taken a few tutorial courses about the basics.
Here is my setup:
I have a 2008 Mac Pro running El Capitan
I have an Apogee Duet plugged into USB and sending audio to an external speaker system.
I have a Novation Launchkey keyboard plugged into USB which is able to play synths on my iPad and Mac.
I have an iPad Air 2 plugged into USB and running various synth apps like Moog Model 15, Animoog and others. I did watch one of the few YT videos titled "Music IO 1.20 — MIDI and Audio over USB", however the video did not cover ALL the steps for setting up the tracks in Logic as it pertains to Music IO. If I connect an audio cable from the iPad headphone out to the Apogee Duet line in, I am able to record that audio, but I thought that is what Music IO is supposed to do for me?
Any help would be greatly appreciated.
In Absolute Gratitude
Message from Patrick
I’m jumping in on this, as I’m the resident Logic guy. Hopefully, we can get you sorted out shortly (I’m out of town starting tomorrow, with limited email access!).
The key things in Logic:
1) Install the AU plug-in. This should have been part of the DMG disk image that you grabbed when getting the musicIO server. All you should need to do is drag the plug-in into the folder (there are links in the DMG itself; it should be a breeze).
2) In Logic, create an audio track. There's a section for inserting effects, and you should see the musicIO plug-in as one of your options. Select that, and you should see a little pop-up window for configuring things. The app can stream four tracks of audio from the iPad into Logic.
3) On the iPad — in musicIO, there’s a tab where you can insert IAA audio generators (synths); drop one in there, and audio from that should stream across USB, through the musicIO server, and into the AU plug-in.
4) Tricky part #1. Logic normally captures audio at the “input” of the track — for example, a raw, unprocessed guitar signal. Everything that happens after that (amp simulators, for example) is *not* recorded. This throws a bit of a monkey wrench into the mix, as the musicIO plug-in is sitting in the effects slot. So…..
4a) Create a second audio track. For the input of that track, use Bus 1.
4b) On the first track, use a “send” (at the bottom of the track) to route the output to Bus 1. There’s a little green volume dial that you’ll need to adjust, to bring the level up.
What’s going to happen is that the audio will come from the iPad, out through the musicIO plug-in on the first track (at the effects stage of the processing), and then get routed into the second audio track input, over bus 1. Because the audio is coming into the input stage of the second track, you can record it there.
A bit tricky, but once you've done it, it should start to make sense. The bus routing is a standard trick in modern DAWs.
5) Tricky part #2. Logic will shut down audio processing on anything that it thinks is silent. So, on the first track, make sure you have an audio input (either an external audio source, or a recorded loop). If Logic thinks that there's no sound coming to the input, it stops all processing on the entire track, and any audio that would come from the plug-in gets ignored. The plug-in itself will give you an "audio muted" warning if it detects that happening.
—-
Cool so far? Once you've got that, you can add three other channels and synths if you like. Each iOS device can host four separate channels (and each channel can have four audio generators, and four effects apps; probably much more than your iPad could actually handle).
Anyway, hopefully this will get you rolling! Let me know if you have trouble.
Best regards,
Patrick
Response from User
Hi Patrick,
Thank you for your fantastic help. Because of your instructions, I was able to get it working!
I did notice that when I set it up as you suggested, in order to record anything, I had to turn on "input monitoring" on the first audio track or else nothing showed up in the 2nd track. I also had to throw a loop in the first track to keep the Music IO plugin from being muted by LPX.
In Absolute Gratitude