I wanted to define a few terms, and the terms all relate to audio, so let's take a look at what they are. The terms are: stereo, mono, dual-channel mono, waveform, levels, pan, and keyframe.

Stereo is two audio channels that run in sync, which spread sounds from the left speaker to the right speaker. Stereo is always two channels. Mono—I'm about to confuse the crew in the studio a whole lot, so this is going to be fun; just watch, because we're going to ask a question of my audience in just a second. A stereo clip spreads audio between the left and right speakers to give us what's called a sonic field: audio that spreads from the right speaker to the left speaker. A mono clip is a single clip that comes equally out of both the left and right speakers, but gives the perception of coming from the center of that sonic field between the two speakers. Dual-channel mono is a special form of mono which is used in interviews all the time: the interviewer is on channel one, and the guest being interviewed is on channel two. A waveform, which we've already seen, is a visual representation of the volume of the sound. Levels is a technical term that means the volume of a sound clip: levels are loud, meaning the clip is louder; levels are low, meaning the clip is soft. Pan can only be done when you're working in a stereo environment; pan adjusts whether the sound comes out of the left speaker, the right speaker, or somewhere in between. And a keyframe, which scares a lot of people, is a point of change during playback. A keyframe is used to adjust audio levels so that one part of a clip is louder than another part of the clip. We use keyframes all the time in Premiere to make this happen.

Now, before we talk about audio specifically, I'm going to call on our two audience members here—our two astute gentlemen, two people who know everything there is to know about just about everything—and have them take a quiz. I would have asked Jim, but when I asked him earlier it wasn't a pretty sight. So anyway, let's take a look at Jack and Lance, because this really affects production, and it affects editing—it affects the quality of the work. Okay, you guys ready? You sitting down? I think the two gentlemen are in a different studio, but we'll do our best. Jack, I want you to stare very closely at Lance's face. Look very closely—you're going to pass the test on this. Look at this with great intensity. Got it? Okay, now here's the question I've got for you: how many mouths does Lance have? What a great answer—the man is incredible. Yes, he has one mouth. Therefore, how many microphones should you use to mic Lance? One. One microphone—what a great answer. Now, should you record this audio of Lance's words of erudition and learning—should you record that as a stereo clip or as a mono clip?
I'm going to make it even harder: not only should you decide whether to record it as a stereo clip or a mono clip, but the output of the file—the final distributed form—is going to be stereo. So how do you record him? "I'd record it in stereo." That's the wrong answer—I know, and it's the one that most people give. And I appreciate you following the guidance on how to set that up, because when you and I were talking earlier I was afraid you were going to let me down, but man, you were just perfect with that.

Audio is surprisingly complex, but here's something you need to pay attention to. With a stereo clip there's nothing you can do—it just sits there. But a mono clip gives you the greatest amount of flexibility possible. So when you are recording a speaker—if this is my sonic field here—a stereo file fills that sonic field; there's nothing you can do with it. But if I have a mono clip, which is a single clip right here, I can move that mono clip this way, toward the right-hand speaker, or I can move it this way, toward the left-hand speaker, or I can put that mono clip in the center. I've got complete flexibility on where that clip gets panned within the sonic field, which is not possible when we are working in stereo. The output is stereo, but the mono clip is positioned within that stereo field—which means a dual-channel mono clip is required.

Dual-channel mono means that if—in this case, Jim; as we know, Jim is our intrepid host, and it takes a very brave person to be able to host a show like this—anyway, if Jim is interviewing me, we're going to put one mic on Jim and record him on channel one on our camera, then we're going to put a second mic on me and record me on channel two. Jim is not coming out of the left-hand speaker, and I'm not coming out of the right-hand speaker—we want both voices to come out center, which means Jim has to be a mono clip and I have to be a mono clip, both of us panned center. That's a dual-channel mono clip, and
it's the standard way we record most interviews. If you record stereo, you're in deep trouble; if you record mono, you're not.

Well, this gets me into a bigger discussion, which is: let's take a look at audio in and of itself. When we talk, we take a big, long lungful of air, and then the muscles in our lungs squeeze the air, compress the air, and force that air in short bursts across our vocal cords, which vibrate. That sets up a pressure wave, which flows through the air, slamming up against our eardrums, causing our eardrums to vibrate. They trigger a series of neurons inside the ear canal, which flow an electrical signal into the center of the brain. Only a year and a half ago did we discover exactly where in the brain hearing actually occurs—but the sound is transformed from pressure energy at the eardrum into electrical energy. What's interesting is that the neurons in the brain fire up to 500 times a second. We have no idea today how we can hear frequencies higher than 500 cycles per second, because the neurons can't fire that fast.
The biochemistry of a neuron prevents it from firing faster than 500 cycles per second. So how do we hear frequencies higher than 500? It's magic, as far as we know, because nobody's been able to figure it out.

But here's the situation. If this is time going this way, and I have a pressure wave going this way, I have an area of high pressure up here and an area of low pressure down here. If it were all high pressure, nothing could move, because there's no variation—movement is possible because things vary. So I've got this variation in pressure, and engineers like to measure it, so they measure it as a voltage, where this is plus one volt, this is minus one volt, and this is zero volts, on what's called the zero-crossing line. This is the pressure that hits up against our eardrum and causes us to hear.

Don't tell Jim, okay—he doesn't need to know—but I have checked every computer that has shipped since the early 1980s, and not one single computer has an eardrum on it. I know it sounds strange, but computers are eardrum-free. It's a strange thing, but true. So if it requires an eardrum to hear a pressure wave, how does a computer hear audio? A computer hates analog signals.
A computer loves digital signals, and this—an analog signal—is the world's worst environment for a computer. So what the computer does is slice this up into very discrete time slices, called samples, and it measures the average voltage for each sample and records that stair step—that average voltage—which, if I had any drawing skills whatsoever, would closely approximate the shape of that curve in a stair-steppy kind of way. Except even someone as artistically challenged as myself would say that there is no real shape relationship between this stair step down here and that curve up there. How do we get them to be closer? We get them to be closer by putting the samples closer and closer together—making each sample shorter and shorter—to create what could be called a pointillist drawing, which, if I had the skills of Van Gogh, would closely represent in dots the shape of that curve, because the samples, becoming closer and closer together, more closely represent the curve. Life is good.

Now, I remember, Jim, that you were a star physics student in high school, and you recall from your high school physics class the Nyquist theorem. The Nyquist theorem says that if you take the sample rate and divide it by two, that equals the frequency response. You do remember the Nyquist theorem from high school physics? Of course—and to think I thought you were asleep. This is why we care about the sample rate. The sample rate, which for video is normally about 48,000 samples per second, determines the highest frequency response that we can get from our video—sorry, from our audio. This means that a sample rate of 48,000 samples per second yields a high-frequency response of 24,000 cycles per second—which would be really meaningful if we knew what human hearing was.
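The sampling and sample-rate arithmetic described above can be sketched in a few lines of code. This is an illustration only—it assumes a pure sine wave, and the function names are mine, not from any audio tool:

```python
# Sketch of the sampling idea: the Nyquist theorem says the highest
# frequency a given sample rate can represent is sample_rate / 2.
import math

def nyquist_limit(sample_rate_hz: float) -> float:
    """Highest representable frequency (Hz) for a given sample rate."""
    return sample_rate_hz / 2

def sample_sine(freq_hz: float, sample_rate_hz: float, duration_s: float):
    """Turn a continuous sine wave into a list of discrete samples."""
    n = int(sample_rate_hz * duration_s)
    return [math.sin(2 * math.pi * freq_hz * i / sample_rate_hz)
            for i in range(n)]

print(nyquist_limit(48_000))                # 24000.0 -> the 24 kHz ceiling
print(len(sample_sine(440, 48_000, 0.01)))  # 480 samples in 10 ms
```

Halving the sample duration (doubling the rate) is exactly the "samples closer and closer together" idea: more stair steps per second, a closer fit to the curve.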
But there's nothing here that represents what human hearing is, so let's look at phase two. Stay, whiteboard—don't move anywhere. Now, human hearing is a line that goes from 20 cycles per second to 20,000 cycles per second, where 20 is such a low pitch that it feels more like a vibration than a tone. The lowest note on a piano is 27.5 cycles per second, so when you're pounding away at the lowest note on the left-hand side of a piano, it's about 27.5 cycles per second. And 20,000 cycles per second is such a high note that it sounds more like wind through the pine trees than a particular pitch. The highest note on a piano is 4,186 cycles per second, which is a fraction of 20,000. Human hearing is 20 to 20,000 cycles—and this is assuming that you're eighteen years old with normal human hearing. As we get older, we lose the ability to hear high frequencies, and it should surprise no one to learn that men lose their high-frequency hearing faster than women do.
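The piano figures quoted above fall out of equal temperament. Here's a quick check, assuming standard A4 = 440 Hz tuning (an assumption on my part—the talk doesn't mention tuning):

```python
# Frequency of piano key n (1..88) in equal temperament with A4 = 440 Hz:
# each of the 12 semitones in an octave multiplies frequency by 2**(1/12).
def piano_key_hz(n: int) -> float:
    return 440 * 2 ** ((n - 49) / 12)

print(round(piano_key_hz(1), 1))   # 27.5   -> lowest key (A0)
print(round(piano_key_hz(88), 0))  # 4186.0 -> highest key (C8)
```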
If you're younger than eighteen, you can hear below 20 and above 20,000 cycles. If you're doing something like Sesame Street for three-year-olds, they've got the hearing of bats—they'll hear whatever you give them; it doesn't have to be mixed at all, it just has to be there. But if you're doing something for sixty-year-olds, man, you've got to work really hard to get people to understand what the heck is being said, because they don't hear as well.

What's even more interesting is that the whole range of human hearing is 20 to 20,000 cycles—everything we hear is in that range, whether it's noise or music or human speech; it's all pressure waves floating through the air. Human hearing is also not a straight line. It's a hockey stick: it's logarithmic, not linear. At the base we start with 20 cycles; when we double the frequency, the pitch goes up by an octave—think of the octaves on a piano or any other musical instrument. So 20 cycles to 40 cycles is an octave; 40 to 80 is an octave; then 160, 320, 640, and, rounding slightly, 1,250; 2,500; 5,000; 10,000; 20,000. It's interesting to me that there is as much change in pitch in those last 10,000 cycles as there is in those first 20 cycles—both of those represent an octave.

Human speech is a subset of that: roughly 200 cycles to roughly 6,000 or 7,000 cycles. There's about two and a half octaves of sound below human speech, and an octave and a half of sound above human speech—which gets to another important point.
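The "doubling equals an octave" math above is easy to verify in code—a small sketch (the function name is mine):

```python
# Octaves between two frequencies: each doubling is one octave,
# so the count is the base-2 logarithm of the frequency ratio.
import math

def octaves_between(low_hz: float, high_hz: float) -> float:
    return math.log2(high_hz / low_hz)

print(octaves_between(20, 40))                # 1.0  -> 20 to 40 Hz is one octave
print(round(octaves_between(20, 20_000), 2))  # 9.97 -> ~10 octaves of hearing
```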
Vowels—a, e, i, o, and u—are low-frequency sounds. They're what give the voice its timbre, its character, its warmth. But intelligibility comes from consonants, such as T and P and K. Take the difference, in fact, between the letter F, as in "Frank," and the letter S, as in "Sam"—both of which are called fricatives, and both of which are caused by air whistling between the tip of your tongue and the roof of your mouth. The letter F, the letter S: the only difference is the presence or absence of that hissing sound. That hiss sits up around six thousand cycles a second for a guy, and higher still for a girl. If you can hear the hiss, it's an S, and if you can't hear the hiss, it's an F. The problem is, radio circuits and telephone circuits don't carry frequencies that high. You could have absolutely perfect hearing and you'd still be unable to tell an F from an S over an intercom system or over a telephone, because they only pass frequencies up to about 3,500 cycles. And if you've got someone who has a loss of hearing in the high frequencies, they can't tell the difference between the F in "Frank" and the S in "Sam," because their ears don't perceive that high frequency—which means you've got to do some work to overcome the loss of high frequencies. In airplane communications, we talk like "Hotel," "Lima," "X-ray," "Tango"—we turn the letters into words that have no other words similar to them, so that "X-ray" can only refer to the letter X, and "Frank" or "Sam" can only refer to the letter F or the letter S. In shows that we're mixing, we would boost high frequencies, which we cover in our detailed Premiere training—I won't have time for that today. But it's important that we understand that everything we hear relates to these frequencies, and to volume, which is the third part I want to cover before we go back to the computer.

If you remember only one statement—remember this one thing from this lecture on audio: it's nice to know about frequencies, and it's
nice to know about sample rates, but this one you have to understand: at no time can your audio levels exceed 0 dB. Not once, not ever, not for a short period of time, not at all—period. Zero dB—which is written with a lowercase d and an uppercase B—is the absolute maximum your audio can go; you cannot go any louder than that. However, audio levels are logarithmic, just as frequencies are logarithmic. We measure audio in negative numbers—for reasons known only to a few people, and they haven't shared why—but audio levels are measured with negative numbers. Every time the audio level decreases by 6 dB, our audio gain gets cut in half. At 0 dB our audio is at the 100% level; minus 6 is the 50% level; minus 12 is 25%; minus 18 is 12.5%. The farther away from zero I get, the significantly softer the audio gets—which means that we always want our audio to be as close to zero as we can get it without ever going over zero, because as soon as it goes over zero, it distorts. If you're going to a professional environment, it'll be rejected—they won't even broadcast or cablecast the program. If you're going to the web, it'll pop, it'll crackle, a little whistle, a little hum, it'll click—it sounds terrible and nobody will listen. So we want to get our audio as close to 0 dB as we can without actually going over zero.

So how do we do that? Well, now we get to go to the software. Let's go back to this Dr. Cerf clip, and this time I'm going to click on the wrench and expand the tracks. I'm going to grab this little slider here and pull it up so I can see both tracks—hang on a second, let's just get this—and zoom in a bit. These jagged lines represent the waveform.
The waveform is a visual representation of the volume of the sound: where the waveform is very narrow, the sound is very soft; where the waveform is very tall, the sound is very loud. And the waveform varies on a syllable-by-syllable basis. Remember those puffs of air I was talking about at the beginning of my presentation on audio? A syllable is defined as a single puff of air going across our vocal cords, causing a specific sound at that particular time. As you zoom into a waveform, each one of these little clumps is a syllable.

Now let's listen—just hear what Dr. Cerf is saying. We want to make sure we can hear this, but the audio is going to be a bit soft, and that's by intent. "I think it is inescapable that whatever success I've had is a side effect of having been trained as a mathematician and later as a computer scientist... thinking logically, trying to do designs that are..." Okay, now, this is a trimming example. "...thinking logically..." Okay, let's find the end of this: "...is a side effect of having been trained as a mathematician and later as a computer scientist." Here I want to trim to that point—Option+W. I can trim that up. Notice that it left a gap, so I don't want to do that; I'm going to grab the trimming tool and slide it over and have it end with Dr. Cerf. And now let's hear the end of it: "...and later as a computer scientist. But to be a technologist, you cannot understand and use technology if you're not literate and numerate and so on, so we have to have that at the core. But if we don't teach people..." Okay. Here I want to trim this to start with his quote here. But notice how hard it is to hear—"...having been trained as a ma—"—and look at what our levels look like: they're bouncing between negative 24 and negative 12. Fifty percent gain is minus 6.
Twenty-five percent gain is minus 12; twelve and a half percent gain is minus 18; six and a quarter percent is minus 24. We're giving away 90% of our volume here, because he's just too soft: "...that whatever success I've had is a side effect of having been trained as a mathematician and later as a computer scientist... but if we don't teach people how to think..." Okay, so how do we adjust it?

As we look at the audio clip, notice there's this thin horizontal line going across here. This allows me to raise the level of a clip by click, hold, and drag, and notice the yellow tooltip says I'm making the sound, say, 6 dB louder—here I am, making the sound 6 dB louder. A dB is defined as the smallest change in sound that's perceptible by somebody with normal human hearing, so if you're messing with a quarter dB or half a dB, you're never going to hear it. Don't obsess over really, really small incremental dB amounts. See if you can hear a difference: "...that whatever success I've had is a side effect of having been trained as a mathematician and later as a computer scientist, but if we don't teach people how—" Hear the difference when we brought the gain up on the first clip? "...success I've had is a side effect of having been trained..." I would still take it up a bit more—no, sorry, it only allows me to go up 6 dB here; it's a problem of knowing too much software. "...is a side effect of having been trained..." is in there, but I can take this up as well, so we'll pull this one up—hang on, up we go: "...mathematician and later as a computer scientist, but if we don't teach people..." And there's a pop as we go from clip number one into clip number two: "...but if we don't teach people..." I'll show you how to get rid of that pop in the last part of this particular session.
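The dB-to-percentage figures above (50% at -6, 25% at -12, and so on) come from the standard amplitude conversion, gain = 10^(dB/20). A sketch—note the exact values are a hair off the rounded ones quoted in the talk:

```python
# Convert a dB level to a percentage of full amplitude.
# Every -6 dB is (almost exactly) a halving of the level.
def db_to_percent(db: float) -> float:
    return 100 * (10 ** (db / 20))

for db in (0, -6, -12, -18):
    print(db, round(db_to_percent(db), 1))
# 0 -> 100.0, -6 -> 50.1, -12 -> 25.1, -18 -> 12.6
```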
The point here is we need to make the sound able to be heard, and most often, sound recorded on set is recorded soft—intentionally—because the audio engineers don't ever want to record audio on set and have it be so loud that it distorts, which ruins the take, which requires them to get flogged by the director for screwing up the production. So to be safe, every sound engineer on the planet records audio on set down low, somewhere between negative 18 and negative 12 dB, to make sure that even if the talent starts to talk loudly in a very emotional scene, they're not talking so loudly that the audio will distort. Which means that we are always increasing dialogue audio, or increasing interview audio, simply because we never want to run the risk of the audio being too loud to be safely recorded.

Music, on the other hand, is mastered just the opposite. Music is mastered to 0 dB, or a tenth of a dB below zero, because the tools we use for mastering music are different from the tools we use for recording audio on set. Which means that when we bring music in, the music is always extremely loud and needs to be decreased in volume, whereas dialogue and narrators and guest interviews and people recording audio for the first time—human speech is recorded soft, so it has to be brought up. Sound effects will be somewhere in the middle. Sound effects tend to be louder, because they sound better if they're recorded loud and made soft than if they were recorded soft and made loud.
So sound effects tend to be in the midrange, between the extremely loud volumes of music and the extremely low recordings of human speech, which means we are always adjusting audio levels—and that's what that slider allows us to do.

A couple of other notes, and here Premiere is different from other applications. Remember that Jack and I were having this dialogue about whether something should be recorded stereo or recorded mono? At my request, Jack gave the answer that's typical for most people doing production, which is: they will always take a single microphone, they will always record it stereo, and then they wonder why they have such limited control over the audio inside their program. Where possible, always record your audio dual-channel mono. But most camera and audio people haven't had the opportunity to hear this speech, and they record sound incorrectly, which means we need to separate the stereo pair into dual-channel mono. This can only be done before a clip gets edited into the timeline, and we do it by right-mouse-clicking on a clip and—hang on while I find it—we go to Modify > Audio Channels. Notice here that if this were a stereo clip, the channel format would say Stereo. You need to select the channel format and set it to Mono, and make sure that the number of audio tracks is set to the number of microphones you used.
If you recorded just two tracks, this should be set to two; some cameras allow recording four tracks, in which case you'd set this to four. Then you would say: I want channel one mapped to audio track one, and channel two mapped to audio track two. And then when you bring that file in, it comes in as two separate mono tracks as opposed to a single stereo pair—which you can't separate, can't adjust each track's audio on separately, and can't edit each individual track separately. So: right-mouse-click on a clip, or a group of selected clips, go to the Modify menu, go to Audio Channels, and switch it from stereo—the recording mode set by mistake—to mono, and then you can set how many channels you want the software to create.

A stereo pair—I'll just pull a clip in here; this is a music clip. Double-click it—when we double-click a clip, it loads it up into the viewer. Oh, no, no, I knew I shouldn't have done that. This is a music package that I like a great deal, but if I double-click on it, it automatically loads the music package. I made a big point in my notes to say "don't do this"—so we won't even go there for right now; pretend I didn't do that. I'll just slide that up so you can see what this stereo pair looks like. The stereo pair has the left-hand channel on top and the right-hand channel on the bottom. Always, in all software, the beginning of the clip is on the left and the end of the clip is on the right. And this is what waveforms look like for music. Notice that the waveforms themselves are much more connected, much more continuous, compared to the choppy waveforms of human speech. That's because human speech is bursty, and music tends to be much more smooth and flowing.
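The stereo-to-dual-mono conversion described above can be pictured as de-interleaving one (L, R, L, R, ...) sample stream into two independent channels. This is a toy illustration of the idea, not how Premiere does it internally:

```python
# Split an interleaved stereo sample stream into two mono channels.
def split_stereo(interleaved):
    left = interleaved[0::2]   # channel one (e.g., the interviewer)
    right = interleaved[1::2]  # channel two (the guest)
    return left, right

left, right = split_stereo([0.1, -0.4, 0.2, -0.5, 0.3, -0.6])
print(left)   # [0.1, 0.2, 0.3]
print(right)  # [-0.4, -0.5, -0.6]
```

Once the channels are separate, each can be leveled, panned, and edited on its own—which is exactly the flexibility the dual-channel mono workflow buys you.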
What we want to do here is grab this clip and edit it down to the timeline, which we'll do by making this timeline smaller—say, minimize the tracks. I want to target it to go to the A3 track, and we'll click the period key so it goes there. Let's just edit that down. There's our audio clip. Notice that the stereo pair shows up as a single track, and when we maximize—expand—the track, there are our two left and right channels, but they're contained inside a single clip, as opposed to a mono clip, where each channel is its own separate clip. With dual-channel mono I can edit each side individually; here I have to edit the entire stereo pair at the same time. If you're working with music, it's not a big deal, but if you're working with speech, you want to make sure you convert your clip from stereo to dual-channel mono, because it's going to make the editing process a lot easier.

Inside Premiere is an audio mixer. Let's get there: go to Window > Audio Track Mixer, and enlarge it. This is actually a pretty powerful mixer. It's not as powerful as the mixer inside, say, Adobe Audition or Pro Tools, but it's much more powerful than the mixer inside, say, Final Cut 7—or Final Cut 10, which doesn't have a mixer at all. A mixer allows us to adjust the pan. For instance, here with Dr. Cerf—watch: there are two channels here, and as I play this, see these two bouncing green dots? This represents the sound of the left channel, and this represents the sound of the right channel. Let's mute this by clicking the mute button—that means we won't hear the audio, but I just want to illustrate what pan does.
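Numerically, panning a mono clip means splitting its signal between the left and right speakers. One common approach is the constant-power pan law sketched below—an assumption on my part, since the talk doesn't say which pan law Premiere uses:

```python
import math

# Constant-power pan: position -1.0 = hard left, 0.0 = center, +1.0 = hard right.
# The cos/sin pair keeps left**2 + right**2 == 1, so perceived loudness stays
# constant as the sound moves across the sonic field.
def pan_gains(position: float):
    angle = (position + 1) * math.pi / 4      # map [-1, 1] to [0, pi/2]
    return math.cos(angle), math.sin(angle)   # (left gain, right gain)

l, r = pan_gains(0.0)
print(round(l, 3), round(r, 3))  # 0.707 0.707 -> center, equal power
l, r = pan_gains(-1.0)
print(round(l, 3), round(r, 3))  # 1.0 0.0 -> hard left
```

"A little left of center" is just a position like -0.2: slightly more gain to the left speaker than the right, with the voice still clearly audible in both.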
I'm going to grab this slider and drag it all the way to the left, and let's take a listen—you'll hear that Dr. Cerf is coming out of the left-hand channel: "I think it is inescapable that whatever..." The pan control determines where the sound is. Now we'll move him to the center of the audio: "I think it is inescapable that whatever success I've had..." And now we'll move him all the way to the right: "I think it is inescapable that whatever success I..." The power of a mono clip is that now I can say I want him just a little bit left of center. I simply grab this and drag it over and put him just a little bit left, so he's leaning a little left: "I think it is..." Let's make it a little more, so it's easy to see on the audio meters: "...inescapable that whatever success I've had..." Now we'll lean him to the right: "...it is a side effect of having been trained..."

The reason we would do this: let's say I'm having a dialogue between Jack and Lance on one side and myself on the other. I would put their microphones a little bit to the right and my microphone a little bit to the left, so I open up the sonic field. But I don't put them all the way to the right and all the way to the left, because you've got no control over what speaker is being used
to listen to that audio. Maybe the left-hand speaker on your TV is blown out; maybe the right-hand headset doesn't work, or your earbud's not in. So you always want to take your main talent and keep them close to center, and lean them left or right a little bit—but not a lot—to make sure that regardless of how good the audio playback device is, you're going to be able to hear them.

Lance, question? "So say there's a scene of somebody walking from left to right, and he's talking—can you keyframe this from left to right?" That's exactly the kind of thing you'd do—but I'm going to talk about keyframes in the fourth segment, so we'll hold off on that. Perfect.

We can also adjust levels inside the mixer. For instance, here we'll adjust this as well. Jack, remember earlier I was saying we're going to create stuff that's going to make you cringe? Well, this is a cringeworthy moment. We're going to—let's just blow this up a bit and play this: "I think it is inescapable that whatever success I've had is a side effect of having..." It's really easy to use the mixer to very quickly set pan and levels: "I think it is inescapable that whatever success I've had is a side effect of having been trained..."

Let me just show you one more thing, and then I'm going to move off audio, because otherwise I will spend all day here—audio is one of my favorite things to talk about. This left-hand strip is called a channel strip. The channel strip goes from the very top of the mixer to the bottom, and there are four channel strips here: channel strip, channel strip, channel strip, and the master channel strip. Once you understand how one channel strip works, they all work the same way. Notice this slider here: this is how we set levels, by grabbing the slider and dragging it up and down. Zero means that we're not making any changes to the level of the clip compared to the level at which it was recorded.
As I move this up, I'm making the clip louder than the level at which it was recorded—by a dB, or softer by a dB; here, 3 dB. As I move this down, I'm making the clip softer compared to the level at which it was recorded. This is called a relative audio adjustment: it's relative to the level at which it was recorded. The right-hand side of the channel strip is the absolute level of the sound. Here I measure the absolute loudest my audio can be, which is zero. If these red clip lights light, I'm distorting, and if I export the file with those red lights lit, I will destroy my audio, and there is not a technology on the planet that can repair it. So when I play it—"I think it is inescapable that whatever success..."—I'm around negative 18 to negative 12 on an absolute scale. He's still soft, so I'm going to boost his level: "...is a side effect of having been trained as a mathematician and later as a computer..." So now I've increased his gain by about 4 dB. On the left, that's the relative change, and on the right I'm measuring what the absolute levels are. "I think it is..."—you notice there I got the red light; it's too hot! That means I've got to pull it back down; I've distorted that clip. "I think it is..."—just safe, just barely; those red lights did not light. All is good. "I think it is inescapable that..."

So the nice thing about the mixer is that it makes it easy to adjust pan, easy to adjust levels, and easy to measure how each track is contributing to the master output. And there's no harm, no foul if your audio levels distort while you're in the process of mixing and setting levels—the damage is only done when you export or output that file or lay it off to tape. Which means, as long as you haven't made a permanent version of this—exported it—you can always go back and change your levels to make sure they work. Okay.
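What the meters and red clip lights measure can be sketched as peak level in dBFS, where 0 dBFS is full scale and anything at or above it clips. A minimal illustration—the function names are mine:

```python
import math

def peak_dbfs(samples) -> float:
    """Peak level in dB relative to full scale (an amplitude of 1.0)."""
    peak = max(abs(s) for s in samples)
    return float("-inf") if peak == 0 else 20 * math.log10(peak)

def clips(samples) -> bool:
    """True if any sample reaches or exceeds full scale -> the red light."""
    return any(abs(s) >= 1.0 for s in samples)

quiet = [0.1, -0.12, 0.11]
print(round(peak_dbfs(quiet), 1))  # -18.4 -> soft, like on-set dialogue
print(clips(quiet))                # False
print(clips([0.5, 1.0, -0.3]))     # True
```

This also shows why distorting while mixing is harmless: the math above is only evaluated on the samples you finally export, so until you export, nothing is destroyed.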
Jay Marie would like to know: can you export audio to Audition, edit it, and bring it back into Premiere and resync? Yes, and it is easy, and it's my absolute favorite recommended technique. You go to the sequence that you want to export and highlight it, then go up to Edit. It's not File, as you'd think it would be, but it is not. If you look closely at the bottom of the Edit menu, it says Edit in Adobe Audition—we're going to edit the entire sequence. When you select this, it asks what you're going to call it, and it defaults to the sequence name. The critical thing you have to check is Export Preview Video, because if you don't check Export Preview Video, you'll never be able to bring it back—that must be checked, and it's off by default. Also, you want to set audio handles to be at least five seconds. So those are the two settings you need to change: audio handles to five seconds, and Export Preview Video—and you click OK.

A little humming and whistling ensues as it creates a small mini-file of your video that can then be moved over into Audition. It also exports not the audio itself but an XML file, which is simply a series of pointers that point to where your media is stored. It moves that XML file over to Audition, it moves the video file over into Audition, it then starts Audition from your hard disk and loads all that stuff automatically with all the files in place, so that after just a few seconds, presto—Audition opens up and everything is loaded and ready for editing. Ta-da, just like that.

Then, when you're done, you go back to—go back to, hang on, we'll get there, there we go—the Multitrack menu. You would think it would be in the File menu, and because it isn't in the File menu I have trouble remembering it. You say Export to Adobe Premiere Pro, and it then asks: what do you want to create? Do you want to create each track
as a stem? No, I want to mix down the entire session; I want to create a stereo file. So I say mix down to a stereo file, click Export, and it mixes the file, switches back to Premiere, and says: let's just put this on a new audio track right here. Click OK, click the wrench, minimize all the tracks, and mute everything except our final mix—and we're done. I mean, the process of moving back and forth between the two is really simple and very clean. Unless you're doing just one or two speakers, it's so much better to do your audio mix inside Audition. But I haven't had a chance to talk about Audition here at CreativeLive, so rather than even get into that, we'll leave it for another time. But talking about audio mixing in Audition—I mean, anybody can create music; you just have to listen to music today. But not anybody can do an audio mix and have it sound good. That takes talent. So we'll save that for another speech. Perfect, thank you.

So Geno would like to know: is it possible to adjust the audio to give his voice more resonance?
Yes—you don't necessarily need more resonance, but what you do need is to warm up the bottom end with bass, and you use the Parametric EQ filter for that. And you've got to bring me back for audio, that's all there is to it; we just don't have enough time to cover it here. The Parametric EQ filter is going to give you that warmth in the bottom end, the richness that you need in a voice—and now you're leading me into stuff I could talk about for hours. Great.

Okay, so B-Dub asks: so you can only modify the clip to mono before it goes into the timeline? Yes—well, no, wait, wait, let me restate that; I want to use a different word than "modify." You can only convert a stereo clip to mono before it goes into the timeline, and you do that by right-mouse-clicking on whatever clips you have selected—I've got to select a clip that's got audio; there it is—right-mouse-click on it, and you go to the Modify menu. Now, it uses the word "modify," but the reason I don't like it is that we're not really modifying it, we're just converting it—which is dancing on the head of a pin. Audio Channels, and then you just set it from stereo to mono.

One of our guests says: you had the levels set under Read on the sliders—can you briefly describe the use of the other options, Write, Latch, and Touch? My general recommendation when you're getting started—and for right now, I think I agree with this—is don't worry about setting keyframes within the track; set keyframes within the clip. Track keyframes are better and more flexible, especially if you're working with a control surface, but you can suddenly create thousands of keyframes without knowing what you're doing, whereas if you do them manually, at least things are under control. These options all deal with audio keyframes to control volume, pan, or audio filters. Leave it set to Read, which means no keyframes are set. If you are going to use this, always use Touch—it's my favorite, because it gives you the results you expect. Latch and Write
can give you interesting and somewhat anomalous results. So leave it set to Read, which means no keyframes are set—or use Touch.