This is the blog for www.SoftwareProdigy.com. Here you will find updates on our current and future projects. Feel free to leave comments.
Monday, November 23, 2009
Localization/Internationalization/i18n for iPhone apps
I have found this internationalization guide on how to make your iPhone app behave appropriately according to the iPhone's language preference. It's very easy to get started. Ideally the strings are externalized from day one; it's a bit time consuming to externalize them later on :(
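As a quick illustration, an externalized string lives in a Localizable.strings file per language and is looked up with NSLocalizedString. The key, comment and startButton below are just examples:

/* en.lproj/Localizable.strings:  "start_button" = "Start"; */
/* fr.lproj/Localizable.strings:  "start_button" = "Démarrer"; */

// look the string up in the current language's table
NSString *title = NSLocalizedString(@"start_button", @"Label of the main menu start button");
[startButton setTitle:title forState:UIControlStateNormal];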
Friday, November 20, 2009
High Score System in place
I'm still working on the iPhone game, but unfortunately only a couple of hours a week. I have created a high score system using Rails on Heroku.com. Heroku is a very nice Rails hosting service - it's free for a basic setup and can easily be ramped up in a jiffy.
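On the iPhone side, submitting a score is just an HTTP POST to the Rails app. A rough sketch of the idea - the Heroku URL and the score parameter names here are made up for illustration:

- (void)submitScore:(NSString *)playerName points:(int)points
{
    // hypothetical endpoint on the Heroku-hosted Rails app
    NSURL *url = [NSURL URLWithString:@"http://mygame.heroku.com/scores"];
    NSMutableURLRequest *request = [NSMutableURLRequest requestWithURL:url];
    [request setHTTPMethod:@"POST"];
    NSString *body = [NSString stringWithFormat:@"score[name]=%@&score[points]=%d",
                      playerName, points];
    [request setHTTPBody:[body dataUsingEncoding:NSUTF8StringEncoding]];
    // synchronous for brevity; a real app would go asynchronous and check the response
    [NSURLConnection sendSynchronousRequest:request returningResponse:NULL error:NULL];
}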
The next thing I need to check is in-app purchases for downloadable content. Should be interesting...
Sunday, October 25, 2009
Updating the Provisioning Profile
When my provisioning profile expired, I had to create another one, but I had problems hooking it up with the project. This how-to saved the day.
Tuesday, September 29, 2009
Unit testing on the iPhone SDK
I have been busy preparing lectures (still am) but I'm still alive :)
Introduction
I realised that I was adding code to the iPhone project as if I were still in a prototyping stage. I wasn't writing any tests, but I still tried to organize the classes as best as I could. Now I'm starting to lose confidence whenever I'm about to add a feature. Not a good feeling. I want to switch to TDD (Test Driven Development), where you first write a failing test and then write the production code. The excellent side effect of this is that the design of the classes is automatically decoupled. However, I first need to get my existing project covered by tests. I found an interesting book called "Working Effectively with Legacy Code" which gives a lot of tips on how to break down classes, etc. I'm still reading it in my free time, but I will soon be putting the tips to work.
Setting up test harness
I searched for how to set up a test harness in Xcode. Version 3 supports unit tests using the SenTestingKit classes. I found a presentation and an Apple support page on setting up a test harness. They also describe setting up functional tests, however I will only be using unit (logic) tests for now.
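To give an idea, a minimal logic test with SenTestingKit looks something like this (the class and the tested expression are illustrative):

#import <SenTestingKit/SenTestingKit.h>

@interface ScoreTests : SenTestCase
@end

@implementation ScoreTests

// any method whose name starts with "test" is picked up and run on build
- (void)testScoreAddition
{
    int score = 10 + 5; // stand-in for a call into one of your own classes
    STAssertEquals(score, 15, @"expected a score of 15");
}

@end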
Some tips to finalize the setup
Remember to drag any .m files into the Compile Sources build phase of the logic tests target, along with any libraries.
Also edit the active target LogicTests: under the Build tab, in the GCC 4.2 - Language section, turn on Precompile Prefix Header and set the Prefix Header to your project's _Prefix.pch file.
It is very important to do this, as otherwise you will get loads of errors like "could not find CGPoint", etc.
NSLog...where are they output?
What about logging when running the tests? Since executing the unit tests only requires building the project, there is no console output in Xcode. However, you can view the output of NSLog statements in the Console application - launch Console from Spotlight.
Code Coverage
I also wanted code coverage, to know how much of the code is actually exercised by the tests. I found an excellent tutorial on setting up code coverage. It uses CoverStory, a tool for viewing code coverage results.
Now I need to rewire my brain to think in tests...
Friday, August 21, 2009
Creating sparks/bolts in OpenGL ES
I wanted to create some sparks/lightning/bolts kind of thing on the iPhone/iPod touch using OpenGL ES. A quick Google search turned up this Delphi OpenGL project. I adapted it and created a quick spark object. Here is the draw method involved:
-(void) draw
{
#define random ((float)random()/RAND_MAX)
// initialise the start and end points
yDisp[0] = yDisp[STEPS-1] = 0;
// calculate new Y coordinate. new = old + random.
for (int i = 1; i < STEPS-1; i++)
{
    yDisp[i] = yDisp[i] + (random - 0.5f) / 4.0f; // new = old + random; tweak the step size to taste
    // keep each point within 0.075 of both neighbours so the bolt stays continuous
    if (yDisp[i] > yDisp[i-1] + 0.075f) yDisp[i] = yDisp[i-1] + 0.075f;
    if (yDisp[i] < yDisp[i-1] - 0.075f) yDisp[i] = yDisp[i-1] - 0.075f;
    if (yDisp[i] > yDisp[i+1] + 0.075f) yDisp[i] = yDisp[i+1] + 0.075f;
    if (yDisp[i] < yDisp[i+1] - 0.075f) yDisp[i] = yDisp[i+1] - 0.075f;
    // clamp the overall displacement
    if (yDisp[i] > 0.5f) yDisp[i] = 0.5f;
    if (yDisp[i] < -0.5f) yDisp[i] = -0.5f;
}
// Prepare the vertices as a Triangle strip
float rnd;
for (int j = 0; j < STEPS; j++)
{
rnd = 0.04f*(random-0.5f); //0.04 * random between -0.5 and 0.5
vertices[j*6 + 0] = length*j/STEPS + rnd; //x between 0 and length with some slight randomness
vertices[j*6 + 1] = -halfThickness + (yDisp[j] + rnd) * amplitude; //y
vertices[j*6 + 2] = 0; //rnd; //z
vertices[j*6 + 3] = length*j/STEPS + rnd; //x
vertices[j*6 + 4] = halfThickness + (yDisp[j] + rnd) * amplitude; //y
vertices[j*6 + 5] = 0; //rnd; //z
}
// Draw the vertices
glDisableClientState(GL_COLOR_ARRAY);
glDisableClientState(GL_TEXTURE_COORD_ARRAY);
glEnableClientState(GL_VERTEX_ARRAY);
glDisable(GL_TEXTURE_2D);
glEnable(GL_BLEND);
glBlendFunc(GL_SRC_ALPHA, GL_ONE);
glColor4f(0.4f, 0.3f, 0.8f, 1.0f);
glVertexPointer(3, GL_FLOAT, 0, vertices);
glMatrixMode(GL_MODELVIEW);
glPushMatrix();
glTranslatef(x,y,0);
glRotatef(angleInDegrees, 0, 0, 1);
glDrawArrays(GL_TRIANGLE_STRIP, 0, STEPS*2);
glPopMatrix();
}
The vertices and yDisp arrays need to be malloc'ed appropriately in the init method (and freed in dealloc):
vertices = malloc(sizeof(GLfloat) * 3 * STEPS * 2);//3 coordinates for each vertex, 2 vertices for each step
yDisp = malloc(sizeof(float) * STEPS);
Some good values for a decent spark would be length 200, halfThickness 1, amplitude 50.
STEPS was #defined to 40 for now.
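For completeness, the allocation and cleanup could look like this, assuming the spark is a plain NSObject subclass (the method shape is a guess at the surrounding class):

- (id)init
{
    if ((self = [super init]))
    {
        vertices = malloc(sizeof(GLfloat) * 3 * STEPS * 2); // 3 coords per vertex, 2 vertices per step
        yDisp = malloc(sizeof(float) * STEPS);
    }
    return self;
}

- (void)dealloc
{
    free(vertices);
    free(yDisp);
    [super dealloc];
}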
This is just the beginning. I need to add more effects like glowing endpoints and subtle particle systems.
Wednesday, August 05, 2009
gluUnProject for iPhone / OpenGL ES
I wasn't surprised that gluUnProject did not exist in the iPhone SDK, since Apple did not provide a glu library implementation. However, as I had already migrated gluLookAt, it was time to migrate gluUnProject. It has been ages since I last used this function. The only thing I remembered was that it is used to find where you clicked in your 3d world using a 2d device, i.e. a mouse. In our case, on the iPhone, it's the capacitive touch screen that gives us back the 2d coordinates of where we touched the screen.
Since I'm using a perspective view, I needed to translate those 2d coordinates into world coordinates. In my case, on the current project I'm working on, I ultimately want to know which button the user tapped in a grid.
I got the gluUnProject code from MESA. However, the code needed some minor adjustments, namely converting everything from double to float:
- any GLdouble had to be replaced with GLfloat
- any double literals, e.g. 0.0 or 1.0, were converted to their float counterparts, e.g. 0.0f or 1.0f
- any math functions which accepted/returned double were replaced with their float versions, e.g. fabs -> fabsf
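Concretely, the migrated prototype ends up float-only, something like this (the body stays as in the MESA sources, with the substitutions above applied):

// float version of MESA's gluUnProject, matching the call sites below
GLint gluUnProject(GLfloat winX, GLfloat winY, GLfloat winZ,
                   const GLfloat modelMatrix[16],
                   const GLfloat projMatrix[16],
                   const GLint viewport[4],
                   GLfloat *objX, GLfloat *objY, GLfloat *objZ);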
gluUnProject also takes a window-space z value, which you would normally read back from the depth buffer with glReadPixels; that read is not available on the iPhone's tile-based renderer (see the commented-out call below). So I thought to use a value of zero, but that always gave the coordinates of the center of the screen - in reality it was giving me the position of the camera. When I tried 1 instead of 0, the coordinates made more sense but were still not 100% precise.
The solution was to unproject twice: once at the near plane (z = 0) and once at the far plane (z = 1), as discussed in a forum. That gives you a ray, which can then be intersected with a plane to get the exact coordinates - thus converting from 2d to 3d.
Here's the method I have used, which calls the migrated float-only gluUnProject:
-(CGPoint) getOGLPos:(CGPoint)winPos
{
// I am doing this once at the beginning when I set the perspective view
// glGetFloatv( GL_MODELVIEW_MATRIX, __modelview );
// glGetFloatv( GL_PROJECTION_MATRIX, __projection );
// glGetIntegerv( GL_VIEWPORT, __viewport );
//OpenGL's origin (0,0) is at the bottom, not at the top
winPos.y = (float)__viewport[3] - winPos.y;
// float winZ;
//we cannot do the following in OpenGL ES due to tile-based rendering
// glReadPixels( (int)winPos.x, (int)winPos.y, 1, 1, GL_DEPTH_COMPONENT24_OES, GL_FLOAT, &winZ );
float cX, cY, cZ, fX, fY, fZ;
//gives us the camera position (near plane)
gluUnProject( winPos.x, winPos.y, 0, __modelview, __projection, __viewport, &cX, &cY, &cZ);
//far plane
gluUnProject( winPos.x, winPos.y, 1, __modelview, __projection, __viewport, &fX, &fY, &fZ);
//We could use some vector3d class, but this will do fine for now
//ray
fX -= cX;
fY -= cY;
fZ -= cZ;
float rayLength = sqrtf(fX*fX + fY*fY + fZ*fZ); //length of the ray direction
//normalize
fX /= rayLength;
fY /= rayLength;
fZ /= rayLength;
//T = [planeNormal.(pointOnPlane - rayOrigin)]/planeNormal.rayDirection;
//pointInPlane = rayOrigin + (rayDirection * T);
float dot1, dot2;
float pointInPlaneX = 0;
float pointInPlaneY = 0;
float pointInPlaneZ = 0;
float planeNormalX = 0;
float planeNormalY = 0;
float planeNormalZ = -1;
pointInPlaneX -= cX;
pointInPlaneY -= cY;
pointInPlaneZ -= cZ;
dot1 = (planeNormalX * pointInPlaneX) + (planeNormalY * pointInPlaneY) + (planeNormalZ * pointInPlaneZ);
dot2 = (planeNormalX * fX) + (planeNormalY * fY) + (planeNormalZ * fZ);
float t = dot1/dot2;
fX *= t;
fY *= t;
//we don't need the z coordinate in my case
return CGPointMake(fX + cX, fY + cY);
}
Tuesday, August 04, 2009
Disabling Texture Units
After I managed to create multitextured polygons, I ran into another problem. After flushing the multitextured vertex array, I needed to flush a single-textured vertex array for the HUD (heads-up display, i.e. timer, score, etc.). It looked like the HUD was using the texture coordinates of the previous vertex array.
After some googling and trying to understand what was happening, I realized that I needed to disable the 2nd texture unit. So before drawing the non-multitextured vertex array, I disable the second texture unit and then switch back to the first one:
//disable the 2nd texture unit: its client-side texture coordinate array...
glClientActiveTexture(GL_TEXTURE1);
glDisableClientState(GL_TEXTURE_COORD_ARRAY);
//...and texturing itself on that unit (server-side state)
glActiveTexture(GL_TEXTURE1);
glDisable(GL_TEXTURE_2D);
//back to the 1st texture unit
glClientActiveTexture(GL_TEXTURE0);
glActiveTexture(GL_TEXTURE0);
glBindTexture(GL_TEXTURE_2D, [_texture name]);
...
Initially I had tried using glActiveTexture on its own instead of glClientActiveTexture, and obviously nothing was happening. glActiveTexture selects the server-side texture unit - the one glBindTexture, glTexEnv and glEnable(GL_TEXTURE_2D) operate on - while glClientActiveTexture selects the client-side unit that the texture coordinate array calls (glTexCoordPointer, glEnableClientState/glDisableClientState) refer to. Since I'm drawing with vertex arrays, the leftover texture coordinate array on the second unit is what also had to be disabled, and that needs glClientActiveTexture.
Saturday, August 01, 2009
Multitexturing on OpenGL ES
I wanted to create a random highlight animation effect on the buttons that glides over them every now and then. At first I was going to do it with a multipass approach, i.e. first draw the button polygons, then draw the highlight over them. However, it should be more efficient (with loads of polygons, anyway) to use multitexturing. The iPhone has 2 texture units (TUs), so I can take advantage of that.
I had never done multitexturing in OpenGL before, so I had to learn the concept. If you understand the blending functions with the frame buffer, then multitexturing will be easy. The difference is that you combine the results of texture units. Since we have two TUs, we can make the first texture blend with the incoming color, and then overlay the second texture by adding it to the result of the previous TU (GL_PREVIOUS).
You dictate how the TUs combine by setting GL_COMBINE_RGB to GL_MODULATE, GL_ADD, GL_DECAL, GL_REPLACE, etc. Take a look at what each one computes, and experiment a bit with them.
Here is more detailed information about texture combiners using the fixed pipeline, and what it would look like if we used shaders. I must admit that shaders are easier to read, at least such simple ones, but shaders are only supported on the iPhone 3GS.
Before setting up the multitexturing, I set up the texture coordinates for both TUs:
glClientActiveTexture(GL_TEXTURE0);
glTexCoordPointer(2, GL_FLOAT, sizeof(VertexDataMultiTextured), &vd[0].uv0);
glEnableClientState(GL_TEXTURE_COORD_ARRAY);
glClientActiveTexture(GL_TEXTURE1);
glTexCoordPointer(2, GL_FLOAT, sizeof(VertexDataMultiTextured), &vd[0].uv1);
glEnableClientState(GL_TEXTURE_COORD_ARRAY);
Then I set up how the TUs should behave. In my case I had a texture for the button (in an atlas) and used the following OpenGL commands:
glActiveTexture(GL_TEXTURE0);
glBindTexture(GL_TEXTURE_2D, [_texture name]);
glTexEnvi(GL_TEXTURE_ENV, GL_TEXTURE_ENV_MODE, GL_COMBINE);
//blend the texture with the incoming color (on the first unit, GL_PREVIOUS resolves to the primary color)
glTexEnvi(GL_TEXTURE_ENV, GL_COMBINE_RGB, GL_BLEND);
glTexEnvi(GL_TEXTURE_ENV, GL_SRC0_RGB, GL_PREVIOUS);
glTexEnvi(GL_TEXTURE_ENV, GL_SRC1_RGB, GL_TEXTURE);
glTexEnvi(GL_TEXTURE_ENV, GL_OPERAND0_RGB, GL_SRC_COLOR);
glTexEnvi(GL_TEXTURE_ENV, GL_OPERAND1_RGB, GL_SRC_COLOR);
//use the texture's alpha channel
glTexEnvi(GL_TEXTURE_ENV, GL_COMBINE_ALPHA, GL_REPLACE);
glTexEnvi(GL_TEXTURE_ENV, GL_SRC0_ALPHA, GL_TEXTURE);
glTexEnvi(GL_TEXTURE_ENV, GL_SRC1_ALPHA, GL_TEXTURE);
glTexEnvi(GL_TEXTURE_ENV, GL_OPERAND0_ALPHA, GL_SRC_ALPHA);
glTexEnvi(GL_TEXTURE_ENV, GL_OPERAND1_ALPHA, GL_SRC_ALPHA);
//------------------------
And then I selected the second texture unit and added the color information of the glow texture (which was in the same atlas) to the result of the previous TU:
glActiveTexture(GL_TEXTURE1);
glBindTexture(GL_TEXTURE_2D, [_texture name]);
glTexEnvi(GL_TEXTURE_ENV, GL_TEXTURE_ENV_MODE, GL_COMBINE);
//add the previous color information with the texture's color information
glTexEnvi(GL_TEXTURE_ENV, GL_COMBINE_RGB, GL_ADD);
glTexEnvi(GL_TEXTURE_ENV, GL_SRC0_RGB, GL_PREVIOUS);
glTexEnvi(GL_TEXTURE_ENV, GL_SRC1_RGB, GL_TEXTURE);
glTexEnvi(GL_TEXTURE_ENV, GL_OPERAND0_RGB, GL_SRC_COLOR);
glTexEnvi(GL_TEXTURE_ENV, GL_OPERAND1_RGB, GL_SRC_COLOR);
//don't affect the alpha channel; use the result (GL_PREVIOUS) of the previous texture unit
glTexEnvi(GL_TEXTURE_ENV, GL_COMBINE_ALPHA, GL_REPLACE);
glTexEnvi(GL_TEXTURE_ENV, GL_SRC0_ALPHA, GL_PREVIOUS);
glTexEnvi(GL_TEXTURE_ENV, GL_SRC1_ALPHA, GL_PREVIOUS);
glTexEnvi(GL_TEXTURE_ENV, GL_OPERAND0_ALPHA, GL_SRC_ALPHA);
glTexEnvi(GL_TEXTURE_ENV, GL_OPERAND1_ALPHA, GL_SRC_ALPHA);
Friday, July 31, 2009
Always initialize your variables... never assume default value
I had a problem which was about to drive me insane. The app worked fine on the simulator, but not on the device. I introduced depth in the prototype I'm working on, where buttons need to rotate and look 3d-ish. When I added the z value, it was initially going to be zero, and I assumed that a newly created variable would be zero by default. I was wrong. Stack variables and malloc'ed memory are not guaranteed to be zeroed; the simulator just happens to hand back zeroed memory most of the time, while on the device you get garbage, at least for floats. The result was a psychedelic trippy 3d effect, which was ugly to say the least.
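A contrived example of the trap (the struct is hypothetical; the behaviour is just standard C):

typedef struct {
    float x, y, z;
} Vertex3D;

Vertex3D makeVertex(float x, float y)
{
    Vertex3D v;
    v.x = x;
    v.y = y;
    // v.z is never assigned: it holds whatever garbage was in that stack memory.
    // The simulator often happens to hand back zeroes; the device does not.
    return v;
}

// Always initialise explicitly instead:
// Vertex3D v = { x, y, 0.0f };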
Now I know why Java pesters the programmer to not allow him to use variables which have not been initialized.
Thursday, July 30, 2009
Copying files to boot camp windows drive
I discovered that by default I can only view the files available on the Windows Boot Camp drive when I'm in Mac OS X. I needed to put some files on my Windows drive before booting Windows 7 for a while. After a quick Google search I discovered an article which mentions NTFS-3G, which allows you to write to any NTFS drive. Apparently it uses MacFUSE underneath.
Tuesday, July 28, 2009
LLVM/CLang static analyzer minor problem with iPhone SDK3
Running scan-build against an iPhone SDK 3 project did not work out of the box with its default compiler. The solution is to just tell scan-build to use gcc 4.2.
The command would then be something like this
scan-build --use-cc=/usr/bin/gcc-4.2 -k -V xcodebuild
Audacity audio caf file problem with iPhone SDK
Today I came across a problem which wasted an hour of my life! I wanted to add new sound files to my prototype, so I downloaded Audacity to convert some sound files and exported them as uncompressed CAF files. I had read somewhere that the iPhone supports some specific types of encoding, two of which are U-law and A-law. I tried both from within Audacity, but they wouldn't play on the simulator/device. All was good, however, when I saved them as WAV files. Not sure why the CAF files didn't work; someday I'll have to dig into this a bit deeper. For now, I'm happy with the placeholder sounds :)
Bought xp-dev.com SVN service subscription
I required another repository, as I'm doing a quick prototype of a very simple game I had once made with a friend of mine. I have decided to buy a subscription for xp-dev.com, the online SVN service. It has worked pretty well so far, and when you upgrade they provide a backup service as well, besides lifting the limitations (disk space increased to 2 GB) and allowing multiple repositories. $40 is a good price for a hassle-free SVN service which works without any problems in Xcode.
Monday, July 27, 2009
gluLookAt for iPhone
On the iPhone SDK we don't have the glu utility library, so none of the glu functions are available. gluLookAt is a very helpful function for conceptualizing a camera, and I needed that behavior now that I'm adding some depth. So no more glOrthof - but glFrustumf on its own did not cut it.
After googling a bit I found a gluLookAt implementation which is working wonders.
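For reference, a float-only port in the spirit of the MESA/SGI implementation looks roughly like this (the name gluLookAtf and the helpers are my own):

#include <math.h>
#include <OpenGLES/ES1/gl.h>

static void crossf(const GLfloat a[3], const GLfloat b[3], GLfloat out[3])
{
    out[0] = a[1]*b[2] - a[2]*b[1];
    out[1] = a[2]*b[0] - a[0]*b[2];
    out[2] = a[0]*b[1] - a[1]*b[0];
}

static void normalizef(GLfloat v[3])
{
    GLfloat len = sqrtf(v[0]*v[0] + v[1]*v[1] + v[2]*v[2]);
    if (len == 0.0f) return;
    v[0] /= len; v[1] /= len; v[2] /= len;
}

void gluLookAtf(GLfloat eyeX, GLfloat eyeY, GLfloat eyeZ,
                GLfloat centerX, GLfloat centerY, GLfloat centerZ,
                GLfloat upX, GLfloat upY, GLfloat upZ)
{
    GLfloat forward[3] = { centerX - eyeX, centerY - eyeY, centerZ - eyeZ };
    GLfloat up[3] = { upX, upY, upZ };
    GLfloat side[3];

    normalizef(forward);
    crossf(forward, up, side);   // side = forward x up
    normalizef(side);
    crossf(side, forward, up);   // recompute up = side x forward

    // column-major view matrix, then translate the eye to the origin
    GLfloat m[16] = {
        side[0], up[0], -forward[0], 0.0f,
        side[1], up[1], -forward[1], 0.0f,
        side[2], up[2], -forward[2], 0.0f,
        0.0f,    0.0f,   0.0f,       1.0f
    };
    glMultMatrixf(m);
    glTranslatef(-eyeX, -eyeY, -eyeZ);
}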
Saturday, July 25, 2009
No problems with Boot Camp and Windows 7
The hard disk partitioning and installation of Windows 7 through Boot Camp was smooth as silk. No problems whatsoever. Having said that, my main OS of choice will be Mac OS X. Windows 7 will mainly be there for the odd Windows-only file and for minor gaming - minor because I only have the Intel GMA 950 in my MacBook, so I'm lucky if I can play Quake 3 and Halo.
Friday, July 24, 2009
Preparing to install Windows 7 RC on macbook
Before I try installing Windows 7, I decided to make a backup of my MacBook using Time Machine. I tried using iTimeMachine to back up over the network. This small app simply enables TM to use a shared network folder on my Windows machine. After using this utility I found out that you can achieve the same with a single command in the Terminal, as shown in the link below.
I came across 2 problems when trying to use my Windows machine as a sort of Time Capsule. The first was that I needed to create a sparse bundle myself using Disk Utility and copy it over to the remote folder. That wasn't difficult to do, as highlighted in the above link.
The second problem, however, wasted a lot of my time. When TM was starting the initial backup, it would stop, complaining about a problem with the network username or password. This didn't make sense at all, as I could browse the folder in Finder using the exact same password. Apparently the solution was to reboot my Mac and remove all keychain entries for my Windows machine (not just the Time Machine System key). Not sure whether the reboot itself helped or not. Right now I'm waiting for TM to finish so I can start installing Windows 7 RC.
Keyframe animation
I had already written some keyframe animation code in Java for Swing 'n Strike, and I decided to migrate some of it. The keyframe animation code consisted mainly of two classes: a Keyframe class and a KeyframesCollection class. Each Keyframe instance contains information about the position, transparency, frame number/time of the keyframe, and what kind of interpolation to use with the next keyframe (linear, easing in/out, etc.). The interpolation method I had in Java used to give back a Keyframe instance containing the interpolated values. But with Objective-C (actually, thanks to C), I can now keep a reference to a primitive value - any int, any unsigned char, any float, or any point - and the interpolate method manipulates that reference directly. Awesome!
It's quite powerful and easy to use: just create a keyframes collection, give it the address of what it will be modifying, give it some keyframes, and in the update method play the keyframe animation.
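A rough sketch of the pointer-based idea (these names and shapes are mine, not the actual classes):

// A slimmed-down keyframe: just a time and a value
typedef struct {
    float time;
    float value;
} Keyframe;

// Write the linearly interpolated value straight through the target pointer.
void interpolateKeyframes(float *target, Keyframe from, Keyframe to, float now)
{
    float t = (now - from.time) / (to.time - from.time);
    if (t < 0.0f) t = 0.0f;
    if (t > 1.0f) t = 1.0f;
    *target = from.value + (to.value - from.value) * t;
}

// Usage: bind the animation to any float, e.g. a sprite's alpha:
// Keyframe k0 = { 0.0f, 1.0f }, k1 = { 2.0f, 0.0f };
// interpolateKeyframes(&sprite->alpha, k0, k1, elapsedTime);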
Wednesday, July 22, 2009
Faking transparency without alpha channel
I brushed up again on the GL blending functions (glBlendFunc) and discovered you can fake transparent textures without having an alpha channel. You can take a simple RGB texture with a black background (like a lens flare, for example) and set the blending function to GL_ONE, GL_ONE. (Some also make the source factor GL_SRC_ALPHA, but since there is no alpha you might as well use GL_ONE.) So the source factor (for the pixel about to be output) and the destination factor (for the pixel already in the color buffer) are both 1. If the source (texture) pixel is black (0, 0, 0) and the destination (frame buffer) pixel is red (1, 0, 0), the result is red (1, 0, 0), i.e. nothing changes in the frame buffer. More formally, the result is Sf * Sp + Df * Dp, where:
S = source
D = destination
p = pixel tuple
f = factor
Using our example, Sf and Df are 1 and we have black (0, 0, 0) and red (1, 0, 0)
1 * (0, 0, 0) + 1 * (1, 0, 0) = (0, 0, 0) + (1, 0, 0) = (1, 0, 0)
Using another example
1 * (0.2, 0, 0) + 1 * (0.4, 0, 0) = (0.6, 0, 0)
So it's just adding them together. If it's completely black, it's as if it is invisible.
Of course for accurate transparencies we need the alpha channel and use GL_SRC_ALPHA, GL_ONE_MINUS_SRC_ALPHA so that they are blended together properly. If the alpha of the texture pixel is 0.2f then it would be
0.2 * (0.2, 0, 0) + (1-0.2) * (0.4, 0, 0) = (0.04, 0, 0) + ( 0.32, 0, 0) = (0.36, 0, 0)
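In GL terms, the two set-ups boil down to the following standard calls:

glEnable(GL_BLEND);
// additive "fake transparency": black texels add nothing, so they read as invisible
glBlendFunc(GL_ONE, GL_ONE);
// proper alpha blending, when the texture does have an alpha channel:
// glBlendFunc(GL_SRC_ALPHA, GL_ONE_MINUS_SRC_ALPHA);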
glTexEnvi and GL_REPLACE
So I wanted something really simple: a textured object (with alpha channel) that I could fade out, and perhaps color at run time (instead of creating differently colored textures).
I knew that I should just give the object texture coordinates and color information (besides vertex positions). Simple - but it wasn't working. It was as if the color info wasn't being taken into consideration at all. So I played around with a demo which I use as my playground, and the same code worked there. I was going nuts; it was obvious I was missing something. I went through the gl commands in the demo, and I had done all of them - enabling blending, enabling texturing, enabling the vertex/texture/color arrays, the blending function, etc. Still nothing. It then clicked that it must be working automagically in the demo by default, and that I must have some line of code changing the default behaviour. I spotted the line: it was setting the texture mode in glTexEnvi to GL_REPLACE, which basically discards the color information. I had completely forgotten what that line was doing. Ah well...
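For reference, the culprit and the fix (GL_MODULATE, the GL default, multiplies the texture by the per-vertex color):

// was discarding the color array:
// glTexEnvi(GL_TEXTURE_ENV, GL_TEXTURE_ENV_MODE, GL_REPLACE);
// modulate the texture with the vertex color instead:
glTexEnvi(GL_TEXTURE_ENV, GL_TEXTURE_ENV_MODE, GL_MODULATE);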
Sunday, July 12, 2009
Objective-C introduction
I have also found these quick tips for Objective-C at Cocoa Dev Central. Wish I had found them earlier :)