Flash

Learn how to use virtualization to improve access to Adobe Connect and foster better testing and cost savings.
about 7 hours ago
First of all, I am not a musician by any stretch of the imagination, a fact that will become obvious all too soon. But if you're going to make sound with code, you wind up making either sound effects or music. So let's start with music.

My goal here was to create a simple "tracker" application: you program in notes, and it plays those notes back in sequence to form a song. Of sorts. I'm going to keep the song very simple: Mary Had a Little Lamb. It's a song where every note lasts one beat, so you don't have to mess around with different note lengths. Here's a simple version of the song transcribed into notes and rests:

g f e f g g g - f f f - g b b - g f e f g g g g f f g f e - - -

A quick search on the 'net gives you the frequency values for the notes b, e, f and g, the only ones you'll need for this song. Code that into an object:

```javascript
scale = {
    g: 392,
    f: 349.23,
    e: 329.63,
    b: 493.88
};
```

Then you can code the song as a string:

```javascript
song = "gfefgg-fff-gbb-gfefggggffgfe---";
```

Now you can create an AudioContext and an oscillator, and set an interval that runs at a certain speed. In the interval callback, get the next note, look up its frequency and set the oscillator's frequency to that value. Like this:

```javascript
window.onload = function() {
    var audio = new window.webkitAudioContext(),
        osc = audio.createOscillator(),
        position = 0,
        scale = {
            g: 392,
            f: 349.23,
            e: 329.63,
            b: 493.88
        },
        song = "gfefgg-fff-gbb-gfefggggffgfe---";

    osc.connect(audio.destination);
    osc.start(0);
    setInterval(play, 1000 / 4);

    function play() {
        var note = song.charAt(position),
            freq = scale[note];
        position += 1;
        if(position >= song.length) {
            position = 0;
        }
        if(freq) {
            osc.frequency.value = freq;
        }
    }
};
```

This actually works, and you should be able to recognize the melody, somewhat. But it leaves a lot to be desired. The biggest problem is that there is no separation between notes: you have a single oscillator running and you're just changing its frequency.
This creates a sort of slide between notes rather than distinct notes. And when there's a rest, well, there is no rest; it just keeps playing the last note.

There are various ways to try to handle this. One would be to call stop() on the oscillator, change its frequency, then call start() again. But when you read the documentation, it turns out that start and stop are one-time operations on an oscillator: once you call stop, it's done. That particular oscillator cannot be restarted.

So what to do? The suggested answer is actually to create a new oscillator for each note. Initially this sounds like a horrible idea. Create and destroy a new object for every single note in the song? Well, it turns out that it's not so bad. There are some frameworks that create a sort of object pool of notes in the background and reuse them, but the downside to that is that every note you create and start continues playing even when you can't hear it. It's your choice, and I suppose you could do all sorts of profiling to see which is more performant. But for Mary Had a Little Lamb, I think you'll be safe creating a new oscillator each time.

To do this, make a new function called createOscillator. This will create an oscillator, specify its frequency and start it. After a given time, it will stop and disconnect that oscillator. You can then get rid of the main osc variable in the code and call createOscillator whenever you want to play a note:

```javascript
window.onload = function() {
    var audio = new window.webkitAudioContext(),
        position = 0,
        scale = {
            g: 392,
            f: 349.23,
            e: 329.63,
            b: 493.88
        },
        song = "gfefgg-fff-gbb-gfefggggffgfe---";

    setInterval(play, 1000 / 4);

    function createOscillator(freq) {
        var osc = audio.createOscillator();
        osc.frequency.value = freq;
        osc.connect(audio.destination);
        osc.start(0);
        // stop and throw away this oscillator after the note has sounded
        setTimeout(function() {
            osc.stop(0);
            osc.disconnect();
        }, 200);
    }

    function play() {
        var note = song.charAt(position),
            freq = scale[note];
        position += 1;
        if(position >= song.length) {
            position = 0;
        }
        if(freq) {
            createOscillator(freq);
        }
    }
};
```
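As an aside, the hard-coded frequencies in the scale object can be derived rather than looked up. Here's a minimal sketch of the math; noteFrequency is a hypothetical helper name, not something from the original post. It assumes standard equal temperament, where each semitone multiplies the frequency by the twelfth root of 2 and A4 is fixed at 440 Hz.

```javascript
// Equal temperament: each semitone is a factor of 2^(1/12); A4 = 440 Hz.
// noteFrequency is a hypothetical helper, not part of the original post.
function noteFrequency(semitonesFromA4) {
    return 440 * Math.pow(2, semitonesFromA4 / 12);
}

// The four notes the song needs, as semitone offsets from A4:
var scale = {
    e: noteFrequency(-5),  // ~329.63 Hz
    f: noteFrequency(-4),  // ~349.23 Hz
    g: noteFrequency(-2),  // ~392.00 Hz
    b: noteFrequency(2)    // ~493.88 Hz
};
```

This reproduces the looked-up values to two decimal places and makes it trivial to transpose the song or add more notes.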
3 days ago
In yesterday's post I covered the bare-bones basics of creating audio with the Web Audio API. In this post, I'll demonstrate one way to start creating some interactivity.

One of the simplest and most dynamic ways to capture interactivity on a computer is to read the mouse position. The strategy for this next experiment is to use the mouse's y-position to control the frequency of a single oscillator. The code for this is super simple:

```javascript
window.onload = function() {
    var context = new window.webkitAudioContext(),
        osc = context.createOscillator(),
        h = window.innerHeight;

    osc.connect(context.destination);
    osc.start(0);

    document.addEventListener("mousemove", function(e) {
        osc.frequency.value = e.clientY / h * 1000 + 300;
    });
};
```

Create an AudioContext and an oscillator, get the window height, connect and start the oscillator, and listen for mousemove events. In the mousemove handler, e.clientY / h will be a number from 0 to 1. Multiply this by 1000 and add 300 and you'll have a frequency from 300 to 1300, which gets assigned to the oscillator's frequency value. Move your mouse around the screen and you get different pitches. Simple.

Remember, the above has only been tested in the latest version of Chrome at the time of this writing. Other configurations may work; some may require some changes.

Now what about the x-axis? Yesterday you had two oscillators going. Let's try to hook the mouse's x-position to that second oscillator.
```javascript
window.onload = function() {
    var context = new window.webkitAudioContext(),
        osc = context.createOscillator(),
        osc2 = context.createOscillator(),
        gain = context.createGain(),
        w = window.innerWidth,
        h = window.innerHeight;

    osc.frequency.value = 400;
    osc.connect(context.destination);
    osc.start(0);

    gain.gain.value = 100;
    gain.connect(osc.frequency);

    osc2.frequency.value = 5;
    osc2.connect(gain);
    osc2.start(0);

    document.addEventListener("mousemove", function(e) {
        osc.frequency.value = e.clientY / h * 1000 + 200;
        osc2.frequency.value = e.clientX / w * 30 + 5;
    });
};
```

This is much the same as the final code from yesterday, but now you're using the mouse x- and y-positions to control the frequencies of both oscillators. Move your mouse all over the screen and you'll get all kinds of science-fictiony sounds. I picture a 1950's flying saucer taking off, or maybe an alien ray gun. Mess with the frequency ranges of both oscillators, and try changing the oscillator types for both – mix and match square, sawtooth and triangle waves for all kinds of interesting results.
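The two linear mappings inside the mousemove handler follow the same pattern, so they can be factored into a small helper. This is just an illustrative sketch; mapRange is a made-up name, not part of the original code or the Web Audio API.

```javascript
// mapRange is a hypothetical helper, not from the original post:
// maps value in [0, max] linearly onto [outMin, outMax].
function mapRange(value, max, outMin, outMax) {
    return value / max * (outMax - outMin) + outMin;
}

// With it, the handler bodies above would read:
//   osc.frequency.value  = mapRange(e.clientY, h, 200, 1200);
//   osc2.frequency.value = mapRange(e.clientX, w, 5, 35);
```

Pulling the ranges out like this makes it easy to experiment with different frequency spans without re-deriving the multiply-and-add arithmetic each time.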
7 days ago
This is something I've wanted to play with for a long time. So the other day I buckled down and started searching around. Yes, you can create sound in JavaScript. In some browsers. Supposedly it works in Chrome 14, Firefox 23, Opera 15 and Safari 6. Not IE. But I've only tested this in Chrome, so for now consider it something experimental and fun to play with, not for your super awesome works-in-every-browser game.

I found several sites that had built up complex libraries based on the Web Audio API. These weren't the greatest things to learn the basics from, but I was eventually able to pare some of the code down to the bare minimum needed to create some sounds in the browser. There's also the MDN documentation here: https://developer.mozilla.org/en-US/docs/Web_Audio_API, which is a great reference, but not a step-by-step tutorial for creating sounds. There are a few tutorials linked from there as well, but none really covered what I was interested in. So, to get down to it, let's create some noise.

First you need to create an AudioContext object. This is similar in concept to HTML5's 2D graphics context for canvas. The context is the overarching object that you'll use to create all the pieces that will shape the sound you're going to make. For WebKit-based browsers, you get an AudioContext like so:

```javascript
var context = new window.webkitAudioContext();
```

The AudioContext has a few properties, the most important one being "destination". The destination is basically the output of the context, where the sound goes. You can think of it as your speakers.

The next thing you need to know about the Web Audio API is that it is a node-based system. You use the AudioContext to create various nodes that create and shape sounds. Nodes have inputs and outputs that you can use to hook various nodes together into different configurations. The most direct way to create a sound is to create an oscillator node.
An oscillator node has zero inputs and one output. You can hook that output to the destination of your context. You'll also need to specify a frequency for the oscillator; 440 Hz will create the musical note A. Here's the code:

```javascript
var context = new window.webkitAudioContext();
var osc = context.createOscillator();
osc.frequency.value = 440;
osc.connect(context.destination);
osc.start(0);
```

And here's how this looks from a node view: you have an oscillator node with a frequency of 440 connected to the destination of the AudioContext. Call start(0) and you should get an annoying sound coming out of your speakers.

The oscillator node has a couple of other properties. One is "type": the type of wave it uses to generate the sound. It defaults to "sine", but you can try "square", "sawtooth" or "triangle" and hear how they sound:

```javascript
osc.type = "sawtooth";
```

There's also a "custom" type, but that involves creating and setting a custom wave table. If you're into it, go for it.

Anyway, wasn't that easy? Let's expand on it and create another oscillator that messes with the first one. To do this, you'll create two new nodes: an oscillator node and a gain node. A gain node is usually used to change the volume of a sound, but you'll be using it here to alter the frequency of the original oscillator node. You'll also create another, slower oscillator node whose output will be connected to the gain node. A gain node has a single input and a single output. As the new oscillator cycles up and down at a frequency of 1 Hz (once per second), it will affect the output of the gain node. A gain node also has a gain value; if you set that to 100, the gain node's output will cycle from +100 to -100 as the new oscillator slowly cycles. Now you need to hook this +/-100 output up to the frequency of the original oscillator.
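The effect that slow, gain-scaled oscillator has on the main oscillator can be sketched numerically, outside the Web Audio API entirely. modulatedFrequency below is a made-up name illustrating the math, not actual audio code: the slow oscillator produces a sine in [-1, 1], the gain node scales it by 100, and the result is added to the main oscillator's base frequency.

```javascript
// The slow oscillator outputs a sine wave in [-1, 1]; the gain node scales
// it by gainValue, and the sum with the base frequency is what the main
// oscillator ends up playing at time t (in seconds).
// modulatedFrequency is an illustrative helper, not part of the Web Audio API.
function modulatedFrequency(base, gainValue, modFreq, t) {
    return base + gainValue * Math.sin(2 * Math.PI * modFreq * t);
}

modulatedFrequency(440, 100, 1, 0);    // 440 Hz at the start of the cycle
modulatedFrequency(440, 100, 1, 0.25); // 540 Hz at the peak
modulatedFrequency(440, 100, 1, 0.75); // 340 Hz at the trough
```

So with a base of 440 Hz, a gain of 100 and a 1 Hz modulator, the audible pitch sweeps between roughly 340 and 540 Hz once per second, which is the warbling effect described above.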
7 days ago
Introducing HTML5 extensions, the new way to create panels.
8 days ago
I made a shader before to mask a texture so that it doesn't have to be rectangular. Turns out I was using an old version of NGUI, so when I updated (to version 2.65) my previous shader didn't work anymore. Also, Nicki Thomas Hansen made another shader so that you could use the masked texture inside a clipped panel. In doing so, he also explained what NGUI was doing and how it selects the correct shader. So, based on his AlphaClip, I remade my shader so that it works with the new version of NGUI. Here is the code for it:

```
Shader "Unlit/Transparent Colored Masked" {
    Properties {
        _MainTex ("Base (RGB), Alpha (A)", 2D) = "white" {}
        _AlphaTex ("MaskTexture", 2D) = "white" {}
    }

    SubShader {
        LOD 100
        Tags {
            "Queue" = "Transparent"
            "IgnoreProjector" = "True"
            "RenderType" = "Transparent"
        }
        Cull Off
        Lighting Off
        ZWrite Off
        Fog { Mode Off }
        Offset -1, -1
        Blend SrcAlpha OneMinusSrcAlpha

        Pass {
            CGPROGRAM
            #pragma vertex vert
            #pragma fragment frag
            #include "UnityCG.cginc"

            struct appdata_t {
                float4 vertex : POSITION;
                float2 texcoord : TEXCOORD0;
                fixed4 color : COLOR;
            };

            struct v2f {
                float4 vertex : SV_POSITION;
                half2 texcoord : TEXCOORD0;
                fixed4 color : COLOR;
            };

            sampler2D _MainTex;
            float4 _MainTex_ST;
            sampler2D _AlphaTex;

            v2f vert (appdata_t v) {
                v2f o;
                o.vertex = mul(UNITY_MATRIX_MVP, v.vertex);
                o.texcoord = TRANSFORM_TEX(v.texcoord, _MainTex);
                o.color = v.color;
                return o;
            }

            fixed4 frag (v2f i) : COLOR {
                fixed4 col = tex2D(_MainTex, i.texcoord) * i.color;
                fixed4 a2 = tex2D(_AlphaTex, i.texcoord);
                if (a2.a < col.a)
                    col.a = a2.a;
                return col;
            }
            ENDCG
        }
    }
}
```

Funny how writing my previous post got my future problem solved by someone else writing an answer post. I will update the previous post (http://www.zedia.net/2013/masking-textures-using-shaders-ngui/) so that it points to this one as well.
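The masking rule in the frag function boils down to taking the minimum of the two alpha values, sampled at the same texture coordinate. Sketched outside shader code for clarity (maskedAlpha is an illustrative name, not part of the shader or NGUI):

```javascript
// The fragment shader keeps the sprite's color but clamps its alpha to the
// mask texture's alpha at the same texcoord -- i.e. the output alpha is the
// minimum of the two. maskedAlpha is an illustrative helper only.
function maskedAlpha(spriteAlpha, maskAlpha) {
    return Math.min(spriteAlpha, maskAlpha);
}

maskedAlpha(1.0, 0.25); // opaque sprite pixel, mostly transparent mask -> 0.25
maskedAlpha(0.5, 1.0);  // mask fully opaque, sprite's own alpha wins -> 0.5
```

So wherever the mask texture is transparent, the sprite becomes transparent too, which is what lets the widget take a non-rectangular silhouette.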
11 days ago
At long last, the book is available for sale. It took a few days to get through all the copy edits and a few last-minute tweaks, then I hit the big old publish button. A few hours later it was live … except in the US, Japan, India and Canada. I'm not sure if there is just some lag in the system for certain countries. I contacted support, they said they'd look into it, and I never heard back, but the book soon went live in the US and Canadian stores. As of this writing it's still listed as unavailable for purchase in India and Japan. It may resolve itself, but I'll keep pushing on it anyway. As mentioned previously, the whole process was a great experience, and it feels like a big accomplishment to have pulled it off from inception to being in the store, all under my own steam. Anyway, go get it if this is the kind of thing you're interested in. I hope you enjoy it. If so, it would be really nice to start seeing some good reviews stacking up there.
14 days ago
I love Kickstarter. I think it is one of the best inventions of this decade. It gives people a say in what gets made, and that is a very powerful thing. So I decided I would participate more in crowdfunding, notably in the video game area. Since I don't really have time to get into huge games, the type of game usually found on Kickstarter is perfect for me. I'm going to try to buy most of my games through crowdfunding. Here are two games I backed:

The Fall
http://www.kickstarter.com/projects/189665092/the-fall-dark-story-driven-exploration-in-an-alien?ref=live
This seems to be made by one guy alone, and it looks awesome. Just for that it was worth helping it a bit. Also, who wouldn't want a deeper, darker Metroid?

Sunless Sea
http://www.kickstarter.com/projects/failbetter/sunless-sea?ref=live
I liked the look of this one, plus they said their inspirations were Don't Starve, FTL and roguelikes; that has to be good.

So there you have it. I think you should back these projects too, so that they are made more awesome. I'll leave you with this tip: don't back too many projects at the same time, because you'll get too many email updates from them.
16 days ago
Add a custom tab in the viewer that functions as a store and library on the iPhone using version 2 of the library and store APIs.
17 days ago