Yesterday I tried to create an Adobe AIR project with a background image.  I resized the image (bg.width = 1024; bg.height = 768), but it looks jagged, with a blank area on the left.  May I know why this happens?

Siew Wei

If there is a blank area on the left, I’m pretty sure you missed out this code:-

stage.align = StageAlign.TOP_LEFT;
stage.scaleMode = StageScaleMode.NO_SCALE;

Resize the image BEFORE you use it in ActionScript. Jaggedness is to be expected when you resize it at runtime. Usually, when we resize an image, we apply a blur filter afterwards.  But ActionScript doesn’t do that itself – there are situations where it might not be what you want.  So if you want to anti-alias any jaggedness, you apply the filter yourself.

I know that you’re writing an app for both tablet devices and smaller mobile screens. The plan is to work from a resolution of 1024×768 for the iPad/Android Tablet. Resize down to 960×640 for the iPod/iPhone, or whatever the resolution of the Android phone is. There will be jaggedness, but perhaps it won’t be noticeable on a smaller screen. But if you see jaggedness, apply a BlurFilter(). (maybe try stage.quality=StageQuality.BEST; … but I don’t think this will make a difference).
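If you’re wondering what pre-scaling actually involves, here’s a sketch in Java (purely illustrative – AIR isn’t involved, and the class and method names are mine): scale the bitmap once, ahead of time, with filtered interpolation, then ship the pre-scaled file as your background.

```java
import java.awt.Graphics2D;
import java.awt.RenderingHints;
import java.awt.image.BufferedImage;

public class SmoothResize {

    // Scale an image once, ahead of time, with bilinear filtering --
    // the equivalent of resizing bg.png in an image editor rather
    // than setting bg.width/bg.height at runtime.
    public static BufferedImage scale(BufferedImage src, int w, int h) {
        BufferedImage dst = new BufferedImage(w, h, BufferedImage.TYPE_INT_ARGB);
        Graphics2D g = dst.createGraphics();
        g.setRenderingHint(RenderingHints.KEY_INTERPOLATION,
                           RenderingHints.VALUE_INTERPOLATION_BILINEAR);
        g.drawImage(src, 0, 0, w, h, null);
        g.dispose();
        return dst;
    }

    public static void main(String[] args) {
        // 1024x768 tablet asset scaled down for a 960x640 phone screen.
        BufferedImage src = new BufferedImage(1024, 768, BufferedImage.TYPE_INT_ARGB);
        BufferedImage dst = scale(src, 960, 640);
        System.out.println(dst.getWidth() + "x" + dst.getHeight());
    }
}
```

Run something like this once at build time (or just resize in an image editor) – the point is that the filtering happens before the app ever sees the image.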

How do I put graphics on top of the map?


Make a nested class which uses canvas graphics to draw a marker on the map. This class extends, as shown. We’ve put a few extra lines into onCreate to associate the overlay with our map.

public void onCreate(Bundle savedInstanceState) {
	super.onCreate(savedInstanceState);
	setContentView(R.layout.main);			// layout and view IDs assumed

	MapView mapView = (MapView) findViewById(;
	mapView.getController().animateTo(new GeoPoint(
		(int)((N_DEGREES+N_MINUTES/60+N_SECONDS/3600) * 1000000),
		(int)((E_DEGREES+E_MINUTES/60+E_SECONDS/3600) * 1000000)));

	MyLocationOverlay myLocationOverlay = new MyLocationOverlay();
	List<Overlay> list = mapView.getOverlays();
	list.add(myLocationOverlay);

	LocationManager lm = (LocationManager)getSystemService(Context.LOCATION_SERVICE);
	lm.requestLocationUpdates(LocationManager.GPS_PROVIDER, 1000L, 500.0f, this);
}

protected boolean isRouteDisplayed() {
	return false;
}

public void onLocationChanged(Location location) {
	if (location != null) {
		double lat = location.getLatitude();
		double lng = location.getLongitude();
		p = new GeoPoint((int)(lat * 1000000), (int)(lng * 1000000));
	}
}

class MyLocationOverlay extends Overlay {
	public boolean draw(Canvas canvas, MapView mapView, boolean shadow, long when) {
		super.draw(canvas, mapView, shadow);
		// Converts our lat/lng GeoPoint to screen coordinates.
		Point myScreenCoords = new Point();
		mapView.getProjection().toPixels(p, myScreenCoords);
		Paint paint = new Paint();
		canvas.drawCircle(myScreenCoords.x, myScreenCoords.y, 16, paint);
		canvas.drawText("You are here", myScreenCoords.x, myScreenCoords.y, paint);
		return true;
	}
}

You might notice we haven’t defined ‘p’. Nest this class inside your map Activity and declare p as a field of type GeoPoint; the location updates above will keep it current.
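By the way, the reason for all the * 1000000 is that GeoPoint stores coordinates as integer microdegrees. Here’s the degrees/minutes/seconds conversion pulled out into a little helper (the class and method names are mine, and the example coordinate is arbitrary):

```java
public class GeoConvert {

    // GeoPoint takes latitude and longitude as integer microdegrees,
    // hence the * 1000000 in the onCreate code above.
    public static int toMicroDegrees(double degrees, double minutes, double seconds) {
        return (int) ((degrees + minutes / 60.0 + seconds / 3600.0) * 1000000);
    }

    public static void main(String[] args) {
        // e.g. 3 degrees, 8 minutes, 27 seconds
        System.out.println(toMicroDegrees(3, 8, 27));
    }
}
```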

I’m writing a game where a spinning top moves around a circular area consisting of tiles.  (Each type of tile will have different properties.)  I want to arrange the tiles in concentric circles.  How do I do this?  (By the way, I’m looking for a way to write it for the iPhone as well as Android.)


If I were writing this project, I might consider writing it in pure ActionScript/AIR for mobile, rather than in two or more native SDKs.  Furthermore, ActionScript 2D graphics is both feature-packed and easy to use.

Even if you wanted to stick to Native SDKs (for possible reasons of speed and memory efficiency) – you could still generate the tiles using ActionScript, and convert them to image files.

But given that your game consists of primarily one moving object – I think ActionScript/AIR is the way to go.

So, how do you draw a circle?  The standard parametric equation for a circle is:-

x = r Cos(theta),   y = r Sin(theta)

I’m going to use bezier curves to construct the circular segments.  And you mentioned that each tile is made of a different material… this is where features of ActionScript graphics come into their own.  I’m going to use a bitmap fill.
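One piece of geometry is worth spelling out before the code. Each arc is approximated by a single quadratic curve whose control point sits where the two endpoint tangents meet – at radius r/cos(segangle/2) along the angle bisector. That’s the coshalf factor you’ll see below. Here’s the same sums sketched in Java (helper names are mine):

```java
public class CircleSegments {

    static final double TO_RADIAN = Math.PI / 180;

    // Point on a circle of radius r at angle theta (in degrees).
    public static double[] pointOnCircle(double r, double thetaDegrees) {
        double t = TO_RADIAN * thetaDegrees;
        return new double[]{ r * Math.cos(t), r * Math.sin(t) };
    }

    // Control-point radius for approximating an arc of 'segangle'
    // degrees with one quadratic curve: push the control point out to
    // r / cos(segangle / 2), where the endpoint tangents intersect,
    // so the curve matches the circle's tangents at both ends.
    public static double controlRadius(double r, double segangleDegrees) {
        return r / Math.cos(TO_RADIAN * segangleDegrees / 2);
    }

    public static void main(String[] args) {
        double[] p = pointOnCircle(40.0, 60.0);
        System.out.println(p[0] + ", " + p[1]);
        System.out.println(controlRadius(40.0, 60.0));
    }
}
```

The narrower the segment angle, the closer the quadratic hugs the true arc, which is why the outer rings use more segments.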

package  {

	import flash.display.Sprite;
	import flash.display.Graphics;
	import flash.display.StageAlign;
	import flash.display.StageScaleMode;

	import asfiles.SpinningTopBackground;

	public class SpinningTop extends Sprite {

		protected static const TO_RADIAN:Number=Math.PI/180;
		protected static const RADIUS:Number=16.0;

		public function SpinningTop() {
			stage.align = StageAlign.TOP_LEFT;
			stage.scaleMode = StageScaleMode.NO_SCALE;

			new SpinningTopBackground(this,384/2,512/2-64);
		}
	}
}

package asfiles {

	import flash.display.Sprite;
	import flash.display.Graphics;
	import flash.filters.BevelFilter;

	public class SpinningTopBackground extends Sprite {

		[Embed(source="bricks.jpg")]	// asset filenames assumed
		protected static const BRICKS:Class;

		[Embed(source="grass.jpg")]
		protected static const GRASS:Class;

		[Embed(source="metal.jpg")]
		protected static const METAL:Class;

		protected static const MATERIALS:Array = [BRICKS, METAL, GRASS];
		protected static const OUTLINE:uint = 0x333333;
		protected static const TO_RADIAN:Number=Math.PI/180;
		protected static const RADIUS:Number=40.0;

		public function SpinningTopBackground(screen:Sprite,xx:Number,yy:Number) {
			x = xx;		// position on screen, from the xx,yy parameters
			y = yy;
			screen.addChild(this);

			for(var i:int=0;i<3;i++) segment(this,0,i,3);
			for(var j:int=0;j<6;j++) segment(this,1,j,6);
			for(var k:int=0;k<9;k++) segment(this,2,k,9);
			for(var l:int=0;l<11;l++) segment(this,3,l,11);

			filters=[new BevelFilter(1.0)];
		}

		protected function segment(graf:Sprite,i:int,j:int,segs:Number,Material:Class = null):void {
			const st:Number = 12;
			var segangle:Number=360/segs;
			var coshalf:Number=1/Math.cos(TO_RADIAN*segangle/2);
			var o:Number;
			if (i==0) o=0.0; else o=-TO_RADIAN*20;
			graf.graphics.lineStyle(1.0,OUTLINE);
			if (!Material) graf.graphics.beginBitmapFill(new (MATERIALS[Math.floor(MATERIALS.length*Math.random())])().bitmapData);
			else graf.graphics.beginBitmapFill(new Material().bitmapData);
			// Outer corner of the segment, then a straight edge in to the inner radius.
			graf.graphics.moveTo((st+(i+1)*RADIUS)*Math.sin(o+TO_RADIAN*j*segangle),(st+(i+1)*RADIUS)*Math.cos(o+TO_RADIAN*j*segangle));
			graf.graphics.lineTo((st+i*RADIUS)*Math.sin(o+TO_RADIAN*j*segangle),(st+i*RADIUS)*Math.cos(o+TO_RADIAN*j*segangle));
			// Inner arc: one quadratic curve, control point pushed out by coshalf.
			graf.graphics.curveTo(coshalf*(st+i*RADIUS)*Math.sin(o+TO_RADIAN*(j+.5)*segangle),coshalf*(st+i*RADIUS)*Math.cos(o+TO_RADIAN*(j+.5)*segangle),(st+i*RADIUS)*Math.sin(o+TO_RADIAN*(j+1)*segangle),(st+i*RADIUS)*Math.cos(o+TO_RADIAN*(j+1)*segangle));
			graf.graphics.lineTo((st+(i+1)*RADIUS)*Math.sin(o+TO_RADIAN*(j+1)*segangle),(st+(i+1)*RADIUS)*Math.cos(o+TO_RADIAN*(j+1)*segangle));
			if (i==0) {
				// Innermost ring: the fill closes the segment back to the start.
			} else graf.graphics.curveTo(coshalf*(st+(i+1)*RADIUS)*Math.sin(o+TO_RADIAN*(j+.5)*segangle),coshalf*(st+(i+1)*RADIUS)*Math.cos(o+TO_RADIAN*(j+.5)*segangle),(st+(i+1)*RADIUS)*Math.sin(o+TO_RADIAN*j*segangle),(st+(i+1)*RADIUS)*Math.cos(o+TO_RADIAN*j*segangle));
			graf.graphics.endFill();
		}
	}
}

Could you advise how to set up an array to store the notes that the user keys in? I am trying to change my code over to SurfaceView, and use the surface holder.


I liked your prototype, but the displayed musical scores were too small to accurately drag notes onto.  I suppose it depends how nimble your fingers are, but I’ve noticed that the HTC Android touch screen doesn’t feel as accurate as my Apple device.  So I think the editable music score area should be bigger than the displayed music.  I imagined panning along the music score, and selecting a portion to edit, which is rendered larger at the bottom of the screen.

I like SurfaceView, but in this case I think a normal View/invalidate() would be better.  The screen isn’t constantly animated.  Only when you edit notes.

In my implementation, I store the notes numerically in the mNotes array.  0 (zero) corresponds to f, 1 to e, 2 to d, and so on.  -1 corresponds to g.  Integer.MAX_VALUE means there is no note at that position.
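Since the letter names repeat every seven staff positions, the mapping is just modular arithmetic. Here’s a hypothetical helper (not part of the app – names are mine) to illustrate the encoding:

```java
public class NoteCode {

    // Staff positions as in the EditBar class: 0 = f, 1 = e, 2 = d,
    // and so on down the stave; -1 = g. Integer.MAX_VALUE marks an
    // empty slot in the mNotes array.
    static final String[] CYCLE = {"g", "f", "e", "d", "c", "b", "a"};

    public static String name(int position) {
        if (position == Integer.MAX_VALUE) return "-";
        int i = Math.floorMod(position + 1, 7);   // -1 -> g, 0 -> f, 1 -> e ...
        return CYCLE[i];
    }

    public static void main(String[] args) {
        int[] bar = { 0, 1, 2, Integer.MAX_VALUE, -1 };
        StringBuilder sb = new StringBuilder();
        for (int p : bar) sb.append(name(p)).append(' ');
        System.out.println(sb.toString().trim());
    }
}
```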

Rather than dragging notes onto the score.  Just touch the score, and a note will appear at that position.  Then drag vertically into the right position.  Any note can be edited after it has been placed.  To remove a note drag up or down beyond the editable region.


package com.danielfreeman.musicalscore;

import android.content.Context;
import android.view.MotionEvent;
import android.view.View;
import android.view.View.OnTouchListener;

public class EditBar extends View implements OnTouchListener {

	protected static final int BACKGROUND_COLOUR = Color.WHITE;

	protected static final int LINE_COLOUR = Color.BLACK;
	protected static final int NOTE_COLOUR_OFF = Color.DKGRAY;
	protected static final int NOTE_COLOUR_ON = Color.BLACK;

	protected static final float LINE_SPACING = 20.0f;
	protected static final float TOP = 128.0f;
	protected static final float MARGIN = 16.0f;

	protected static final int NOTES_IN_BAR = 10;
	protected static final int NOTE_LOW = -4;
	protected static final int NOTE_HIGH = 12;

	protected static final float NOTE_X = 3;
	protected static final float NOTE_Y = 6;
	protected static final float STALK_UP = -64;
	protected static final float STALK_DOWN = 68;
	protected static final float SEG = 16;
	protected static final int MID = 4;

	protected Paint mScorePaint = new Paint();
	protected Paint mNotePaint = new Paint();
	protected int [] mNotes = new int[NOTES_IN_BAR];
	protected boolean mDragging = false;
	protected int mLastIndex = -1;

	public EditBar(Context context) {
		super(context);
		initialiseColours();
		for (int i = 0; i<NOTES_IN_BAR; i++) mNotes[i] = Integer.MAX_VALUE;
	}

	protected void onDraw(Canvas canvas) {
		canvas.drawColor(BACKGROUND_COLOUR);
		drawScore(canvas);
		drawNotes(canvas);
	}

	protected void initialiseColours() {
		mScorePaint.setColor(LINE_COLOUR);
		mNotePaint.setColor(NOTE_COLOUR_OFF);
	}

	protected void drawScore(Canvas canvas) {
		for (int i=0; i<5; i++) {
			float y = TOP + i * LINE_SPACING;
			canvas.drawRect(MARGIN, y, getWidth()-MARGIN, y+1, mScorePaint);
		}
	}

	protected void drawNotes(Canvas canvas) {
		for (int i = 0; i<NOTES_IN_BAR; i++) {
			int position = mNotes[i];
			if (position!=Integer.MAX_VALUE) {
				drawQuarterNote(canvas, i, position);
			}
		}
	}

	protected void drawQuarterNote(Canvas canvas, int index, int position) {
		float x = indexToX(index);
		float y = positionToY(position);
		if (position<0 || position>9) {
			// Ledger line for notes above or below the stave.
			float y0 = positionToY(2*(position/2));
			canvas.drawRect(x - SEG, y0, x + SEG, y0+1, mNotePaint);
		}
		Path path = new Path();
		path.moveTo(x + 3*NOTE_X, y - NOTE_Y);
		path.cubicTo(x + 5*NOTE_X, y, x-NOTE_X, y+2*NOTE_Y, x-3*NOTE_X, y+NOTE_Y);
		path.cubicTo(x - 5*NOTE_X, y, x+NOTE_X, y-2*NOTE_Y, x + 3*NOTE_X, y-NOTE_Y);
		canvas.drawPath(path, mNotePaint);
		canvas.drawRect(x + 3*NOTE_X - 1, y - NOTE_Y/2, x + 3*NOTE_X + 1 , y - NOTE_Y/2 + ((position > MID ) ? STALK_UP : STALK_DOWN), mNotePaint);
	}

	protected float indexToX(int index) {
		return MARGIN + (index + 0.5f)*(getWidth()-2f*MARGIN)/NOTES_IN_BAR;
	}

	protected float positionToY(int position) {
		return TOP + position * LINE_SPACING / 2f;
	}

	protected int xToIndex(float x) {
		return (int)Math.floor((x-MARGIN)/((getWidth()-2f*MARGIN)/NOTES_IN_BAR));
	}

	protected int yToPosition(float y) {
		return (int)Math.floor((y-TOP)/(LINE_SPACING/2));
	}

	protected void editNote(float x, float y) {
		int index = xToIndex(x);
		if (index>=0 && index<NOTES_IN_BAR) {
			if (mLastIndex!=index) mDragging = false;
			int position = yToPosition(y);
			int note = mNotes[index];
			if (note == Integer.MAX_VALUE || note == position) mDragging = true;
			if (mDragging) {
				if (position>=NOTE_LOW && position<=NOTE_HIGH) mNotes[index] = position;
				else mNotes[index] = Integer.MAX_VALUE;
			}
			mLastIndex = index;
		}
	}

	public boolean onTouch(View view, MotionEvent event) {
		return true;
	}

	public boolean onTouchEvent(MotionEvent event) {
		editNote(event.getX(), event.getY());
		invalidate();	// redraw with the edited note
		return true;
	}
}

The main program is as you’d expect.

package com.danielfreeman.musicalscore;

import android.os.Bundle;

public class MusicalScore extends Activity {
	public void onCreate(Bundle savedInstanceState) {
		super.onCreate(savedInstanceState);
		EditBar editBar = new EditBar(this);
		setContentView(editBar);
	}
}
Note, there seems to be a problem with displaying the graphics properly in the 1.5 emulator. The 2.2 emulator works ok.

I’ve put the following attribute into the application tag of the manifest file to make the application full screen:-

android:theme="@android:style/Theme.NoTitleBar.Fullscreen"

Yesterday, in about two hours, I probably presented enough information about games to fill an entire course.  (Hey, that would be cool!)  I also ended up inspiring MYSELF to write a new game.  I hope that some of you were inspired to create a game too.

I’d encourage you to have a go.  While I don’t pretend to be a guru in absolutely all areas of Android development (anyone who makes this claim is lying), I know quite a bit about games – and I can give you all the expert help you need.  Well, I can help with the easy bit – programming.  Design is another matter.  I’m just in awe of anyone who can come up with great-looking game characters and worlds.

There seemed to be some confusion when I talked about depth-ordering (z-ordering) in an isometric projection world.  After the presentation, I was troubled as to whether people had understood what I was talking about.  I wasn’t sure what was at the root of the misunderstanding either.  So that’s probably why my answers didn’t seem to resolve stuff for you.

I was talking about how to decide what was in front of what in an orthographic world.

The brute-force approach would be to calculate each object’s distance from the observer, and sort the objects on that.  But sorting can be computationally intensive – especially in an MMO (Massively Multiplayer Online) game, where there may be many avatars in the scene, all moving around.

I presented a standard simplification strategy to z-ordering.  The one used by the OpenSpace Flash engine that I’ve used.

In this strategy we effectively number each cell in the way shown above.  Then we apply a rule: something within a cell with a larger number is placed in front of something in a cell with a smaller number.

Note that cell 10 is actually further away from the observer than cell 9.  But this doesn’t matter because there wouldn’t be any overlap between the contents of these cells.
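To make the rule concrete, here’s a small Java sketch.  (The cell numbering is my own row-major assumption, since the diagram isn’t reproduced here: numbers increase along each row, and rows nearer the observer get higher numbers.)  Depth-ordering then reduces to sorting by a single integer – no distance calculations:

```java
import java.util.ArrayList;
import java.util.Comparator;
import java.util.List;

public class DepthOrder {

    // A sprite "registered" to a grid cell.
    static class GridSprite {
        final String name;
        final int row, col;
        GridSprite(String name, int row, int col) {
   = name; this.row = row; this.col = col;
        }
    }

    // Row-major numbering (assumed): rows closer to the observer
    // get higher numbers, so a plain integer comparison decides
    // what draws in front of what.
    public static int cellNumber(int row, int col, int cols) {
        return row * cols + col;
    }

    // Sort back-to-front; draw in list order afterwards.
    public static void sortByDepth(List<GridSprite> sprites, final int cols) {
        sprites.sort(Comparator.comparingInt(s -> cellNumber(s.row, s.col, cols)));
    }

    public static void main(String[] args) {
        List<GridSprite> scene = new ArrayList<>();
        scene.add(new GridSprite("avatar", 1, 0));  // cell 10 in a 10-wide grid
        scene.add(new GridSprite("house", 0, 9));   // cell 9
        sortByDepth(scene, 10);
        // The house (cell 9) draws first, the avatar (cell 10) in front --
        // harmless, because cells 9 and 10 never overlap on screen.
        for (GridSprite s : scene) System.out.println(;
    }
}
```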

So, we spoke about this strategy working for graphical things in the same locality.  This is what seemed to confuse people.  Some of you thought of this as a limitation of the scheme, and seemed to be searching for a fix to overcome this limitation.  When in fact, the “limitation” never manifests itself in a way that the observer would notice.  There’s nothing to fix.

Maybe the misunderstanding lies with what depth-ordering (z-ordering) actually does.  Consider two shapes: a blue square and a yellow circle.  If the shapes overlap, then we notice which shape is in front of the other.  We NOTICE the z-ordering.


In the first example above, the yellow circle is in front of the blue square.  In the second example, it is behind.  If these shapes were associated with cells in our orthographic grid, the shape in the cell with the highest number would be placed in front.  This works well for cells in the same locality.

But when the cells AREN’T in the same locality, the likelihood is that the images associated with these cells WON’T OVERLAP.


In the first example above, the yellow circle is in front of the blue square.  In the second example, it is behind.  Spot the difference?  There ISN’T ANY.  Not to the observer, anyway.  If the image associated with cell 10 of the orthographic world gets placed in front of the image associated with cell 9 (even though cell 9 is closer to the observer), no one will notice.  As far as the observer is concerned, everything looks right.  There’s nothing to fix.  The scheme works.

But when images associated with a cell occupy a larger area of a world, then it is possible for images that are not in the same locality to overlap.  Yesterday, I gave the example of a house, and how we carefully choose which cells to associate (register) the images to – so that our scheme still works.

Yesterday, someone asked about the case where the house had a low roof.  So an avatar seen at cell position 10 or 20 could be seen over the top of the building.

It was at this point that I realised I had REALLY confused you all, and I couldn’t comprehend the root of this confusion well enough to fix things.

Suppose we have an avatar behind the house, at cell position 10.  Suppose there is no overlap between the house and the avatar, so the avatar can be clearly seen.  It is not obscured in any way.  So it doesn’t matter whether the avatar is ordered in front or behind the house.

The blue square represents the avatar.  The yellow circle represents the house.  They don’t overlap.  In the first example above, the yellow circle is in front of the blue square.  In the second example, it is behind.  Spot the difference?


But actually, in the house example, an avatar at cell position 10 IS BEHIND the house.  Whether the avatar is obscured by the house or not.  It is further away from the observer, and our depth-ordering (z-ordering) scheme concurs with this – and places it behind.  There’s no conflict.  No problem.

I think some people thought I was describing some kind of hidden surface removal?  Or maybe object clipping?  I don’t know?

I wasn’t describing this at all.  I was just talking about what was behind, or in-front-of what.

If something is ordered behind something else, but it doesn’t overlap, and it can be seen clearly – there’s no problem.  Just because we’ve decided that it’s behind, we’re not removing it or making it invisible, or clipping it or anything.  It’s still there.  It can be seen.

Maybe just saying it was “behind” something, when there was no overlap was enough to confuse people yesterday?  I don’t know.

I still can’t understand the root of yesterday’s misunderstanding.  Hopefully, I’ve explained myself better in this blog.  If you still have a problem, please leave a comment.  I really want to resolve the misunderstanding – whatever it is.

It’s not possible to cover ALL of Android development in a week. While my course is action-packed, there are some subjects that I don’t have time to do justice to. But I often give my students a web link to a tutorial or online resource to follow up in their own time. I’ve just updated the entire list for the new courses we’re running around Malaysia. So I thought I’d put the list up here for everyone:-



XML Parsing


Upload/Download image files


3D games,312.html

Game engines

Video tutorials and training


PhoneGap in Android




Augmented reality

Android App Inventor

Help forums

A couple of students need to take pictures and store them in a database.  This allows them to easily associate (tag) other data with each picture, and to manage, access, and remove their pictures easily.  We store binary data in a database record using a BLOB.

The database helper class looks like this:-


import android.content.Context;
import android.database.sqlite.SQLiteOpenHelper;
import android.database.sqlite.SQLiteDatabase;

public class MyDBHelper extends SQLiteOpenHelper {

	final protected static String DATABASE_NAME="pictures";

	public MyDBHelper(Context context) {
		super(context, DATABASE_NAME, null, 4);
	}

	public void onCreate(SQLiteDatabase db) {
		db.execSQL("CREATE TABLE storedImages (_id INTEGER PRIMARY KEY, image BLOB, tag TEXT);");
	}

	public void onUpgrade(SQLiteDatabase db, int oldVersion, int newVersion) {
		if (oldVersion >= newVersion) return;
		// onCreate() only runs when the database doesn't exist yet,
		// so the DROP belongs here, not there.
		db.execSQL("DROP TABLE IF EXISTS storedImages;");
		onCreate(db);
	}
}

I put the application inside a TabActivity, although the camera preview gets a bit squashed this way, so the UI needs a bit of thought.  Nevertheless, here is the main Activity code, based on Marakana’s camera tutorial:-



import android.content.ContentValues;
import android.database.Cursor;
import android.database.sqlite.SQLiteDatabase;
import android.hardware.Camera;
import android.hardware.Camera.PictureCallback;
import android.hardware.Camera.ShutterCallback;
import android.os.Bundle;
import android.view.View;
import android.view.View.OnClickListener;
import android.widget.Button;
import android.widget.FrameLayout;
import android.widget.ImageView;
import android.widget.TabHost;
import android.widget.TextView;
import android.widget.TabHost.TabSpec;

public class HelloCamera extends TabActivity {

  protected Preview preview;
  protected Button buttonClick;
  protected MyDBHelper myDBHelper;
  protected TabHost tabHost;

  public void onCreate(Bundle savedInstanceState) {
    super.onCreate(savedInstanceState);
    setContentView(R.layout.main);			// layout and resource IDs assumed

    tabHost = getTabHost();
    newTab("gallery", null,;
    newTab("camera", null,;

    preview = new Preview(this);
    ((FrameLayout) findViewById(;

    buttonClick = (Button) findViewById(;
    buttonClick.setOnClickListener(new OnClickListener() {
      public void onClick(View v) {, rawCallback, jpegCallback);
      }
    });

    myDBHelper = new MyDBHelper(this);
  }

  protected void newTab(String label, Drawable icon, int page) {
  	TabSpec tabSpec = tabHost.newTabSpec(label);
  	tabSpec.setIndicator(label, icon);
  	tabSpec.setContent(page);
  	tabHost.addTab(tabSpec);
  }

  // Called when shutter is opened
  ShutterCallback shutterCallback = new ShutterCallback() {
    public void onShutter() {
    }
  };

  // Handles data for raw picture
  PictureCallback rawCallback = new PictureCallback() {
    public void onPictureTaken(byte[] data, Camera camera) {
    }
  };

  // Handles data for jpeg picture
  PictureCallback jpegCallback = new PictureCallback() {
    public void onPictureTaken(byte[] data, Camera camera) {
      SQLiteDatabase db = myDBHelper.getWritableDatabase();	// writable -- we're inserting
      ContentValues values = new ContentValues();
      values.put("image", data);
      db.insert("storedImages", "tag", values);;			// taking a picture stops the preview
    }
  };

  protected void readDatabase() {
	  TextView info = (TextView) findViewById(;
	  SQLiteDatabase db = myDBHelper.getReadableDatabase();
	  Cursor cursor = db.rawQuery("SELECT * FROM storedImages ;", null);
	  info.setText("Images stored: " + cursor.getCount());	// use of the info view assumed

	  if (cursor.getCount()>0) {
		  cursor.moveToFirst();				// without this, getBlob() throws
		  ImageView image = (ImageView) findViewById(;
		  byte[] data = cursor.getBlob(cursor.getColumnIndex("image"));
		  image.setImageBitmap(BitmapFactory.decodeByteArray(data, 0, data.length));
	  }
	  cursor.close();
  }
}

I use the db.insert() method to put the jpeg data into the database.  Note that the readDatabase() method is very simple: although its query retrieves every row, it only displays the first image.

Finally, here is the Preview class:-


import android.content.Context;
import android.hardware.Camera;
import android.hardware.Camera.PreviewCallback;
import android.view.SurfaceHolder;
import android.view.SurfaceView;

public class Preview extends SurfaceView implements SurfaceHolder.Callback {

  SurfaceHolder mHolder;
  public Camera camera;

  public Preview(Context context) {
    super(context);

    // Install a SurfaceHolder.Callback so we get notified when the
    // underlying surface is created and destroyed.
    mHolder = getHolder();
    mHolder.addCallback(this);
    mHolder.setType(SurfaceHolder.SURFACE_TYPE_PUSH_BUFFERS);
  }

  // Called once the holder is ready
  public void surfaceCreated(SurfaceHolder holder) {
    // The Surface has been created, acquire the camera and tell it where
    // to draw.
    camera =;
    try {
      camera.setPreviewDisplay(holder);
      camera.setPreviewCallback(new PreviewCallback() {
        // Called for each frame previewed
        public void onPreviewFrame(byte[] data, Camera camera) {
        }
      });
    } catch (IOException e) {
      camera.release();
      camera = null;
    }
  }

  // Called when the holder is destroyed
  public void surfaceDestroyed(SurfaceHolder holder) {
    camera.stopPreview();
    camera.release();
    camera = null;
  }

  // Called when holder has changed
  public void surfaceChanged(SurfaceHolder holder, int format, int w, int h) {
    Camera.Parameters parameters = camera.getParameters();
    parameters.setPreviewSize(w, h);
    camera.setParameters(parameters);
    camera.startPreview();
  }
}

I haven’t included the layout file, as it probably needs rethinking, so you can probably come up with a better UI.  At the moment it’s wrapped up in a TabHost, a preview area (FrameLayout), and includes a Button called ‘buttonClick’.