Sunday, September 23, 2012

Jelly Bean on the GNex

By some strange miracle, the Android Jelly Bean update was released for the Verizon Galaxy Nexus today.  For the last two months I never felt the usual (and completely shameful) anxiety I get when waiting for an Android update to be approved by Verizon.  Rather, I was entirely relaxed, perhaps because Ice Cream Sandwich is still good enough on its own.  However, once I heard the news, I went nuts.  All afternoon I tried to force the update to push using various tricks I read about online.  A terrible idea, mind you, and a waste of time, so I bit the bullet and attempted Plan B - "Installing the update manually".

I used to do this all the time with my old Droid 1.  Someone in the community would grab the update file, which you loaded onto the SD card.  From the bootloader, you could then install the update and reboot.  It was simple, easy, and you didn't need to mess around with rooting the device.

But that was back in the Wild West days of Android, when the bootloader was wide open.  These days, most devices lock their bootloaders, though some, like the Galaxy Nexus, protect it with the equivalent of a screen door.  That is to say, it was designed to be incredibly easy to unlock.

At least, that's what everyone in the community claimed.  Yet when I first looked for instructions on how to do it, I came away more confused than before.  Everyone recommended downloading a variety of user-made tools that would unlock and root the device for you.  As I mentioned in my last post, I'm not a big fan of running software like this without knowing what it is doing.  For me, this isn't an option.  Personal choice aside, however, how could unlocking the GNex be considered easy when it required community enthusiasts to come up with the tools to do it?

I knew there was another way.  There had to be.  Why else would people say it was designed to be unlocked?  I confirmed my suspicion fairly quickly.  At least one tutorial on the subject referred to an alternate, more difficult method of unlocking, one which involved using the Android SDK.  Suddenly it all made sense.

Here's the scoop, so far as I can tell.  If you have the Android SDK, you have all the tools needed to unlock the bootloader on the GNex.  With one command, you can reboot it into the bootloader, and with another you can issue the unlock command.  Done and done.  Installing the manual update file for Jelly Bean does require some outside help, in the form of a custom recovery tool like ClockworkMod Recovery.  Thankfully, ClockworkMod is a well-known and reputable piece of software, so I wasn't afraid to use it.  Moreover, once I ran it, I could immediately tell what its purpose was.  It looks just like the bootloader/recovery program from the OG Droid, the one I used to use.  My only issue with ClockworkMod was getting it running.  From what I can tell, you can flash it onto the phone using the SDK tools, but it seemed to go away once the phone restarted.  If that's the case, then I'm even more pleased, since it means I can load it up for one-time use when need be.
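For anyone curious, the SDK route boils down to a handful of commands.  Here's a rough sketch of what it looks like (the recovery image filename is just a placeholder for whatever you grab from the ClockworkMod site, and remember that unlocking wipes the device):

    # reboot the phone into the bootloader (fastboot mode)
    adb reboot bootloader

    # issue the unlock command - this erases all user data
    fastboot oem unlock

    # boot ClockworkMod for a one-time use without flashing it...
    fastboot boot recovery-clockwork.img

    # ...or flash it permanently (stock ROMs may quietly restore their own recovery on reboot)
    fastboot flash recovery recovery-clockwork.img

If I ended up with the "boot" variant, that would explain why Clockwork seemed to vanish after a restart.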

When all was said and done, I had the official Jelly Bean update on my phone, and I learned a valuable lesson.  What most Android enthusiasts considered "hard mode" is actually very easy if you're the kind of person who 1) isn't afraid to install the SDK, 2) knows how to do it, 3) did it already for actual development use, and 4) understands what it does.

Moreover, I discovered that while the actual developers in the Android hacking community are far smarter than I, the guys who are obsessed with unlocking/rooting/flashing ROMs are not necessarily so.  Guess I have to trust my instincts a little more.

Saturday, September 08, 2012

Rolling Your Own (Source Code)


When I was first learning how to use Linux, package managers were a godsend.  I still remember the first time I saw someone use apt to download some needed software.  Here was an OS that was free as in beer, free as in speech, and backed by multiple mirrors of hundreds of software packages which would instantly configure themselves on your system.  For a moment I felt like I was living in the future.  In reality, I was living in a world where the dominance of Windows blinded me to the fact that the cutting edge of computing existed elsewhere, and it was awesome.  On a more practical note, the ease of use of Debian packages made it much easier to set up a working Linux environment.  Dare I say that if it weren't for package managers, I would never have considered Linux to be anything more than a curiosity.

Nowadays, I don't use packages that much.  As the years went by, I found myself being forced to build various pieces of software from source.  At first I simply pasted commands into a shell, but eventually I grew to understand what those commands meant, until I finally got to the point where I could grab a tarball and install it without any guidance.
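For the uninitiated, that guidance usually amounts to the same three-step ritual (a generic sketch; the tarball name is made up, and real projects document their own options):

    tar xzf some-program-1.0.tar.gz
    cd some-program-1.0

    # see what knobs the build exposes before committing to anything
    ./configure --help

    ./configure
    make
    sudo make install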

Then something else happened.  Building from source went from being an option to being my preference.  And it becomes more true with every passing week.

My current view is that package managers are indispensable in at least two situations.  The first is if you're a newbie, or if you use Linux for basic computing tasks.  The second is if you are running a server in an enterprise environment, where your system needs to stay up to date, and simply cannot break.  If you fall into either of these camps, I'd say you're crazy not to rely on packages as much as possible.

For developers and power users, however, I think the negatives of package managers become more severe.  Namely, they introduce a lack of control and transparency to your system, leading to problems that can waste as much of your time as a finicky source bundle.  For instance, what if you want a certain feature to be enabled, or disabled?  With a precompiled package, you have no say.  By the same token, what if your distro's package repository is slow to update to the latest version of a programming language?  These concerns don't affect everyone, but when they do, it can be incredibly frustrating.

In regards to transparency, consider Synaptic Package Manager, which doesn't tell you what gets installed, or where it gets installed to, until the package is actually on your system.  This is counterintuitive to me.  The package is installed without asking you for a destination.  That means it must know where it is supposed to go.  Why can't I see that ahead of time?
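To be fair, the information is obtainable on Debian-based systems - you just have to go digging for it on the command line instead of getting it up front (package names here are placeholders):

    # simulate the install to see what would be pulled in, without touching the system
    apt-get install --dry-run some-package

    # list the dependencies a package declares
    apt-cache depends some-package

    # list the files inside a downloaded .deb, before installing it
    dpkg -c some-package.deb

    # list the files an already-installed package put on disk
    dpkg -L some-package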

Furthermore, I can't shake the feeling that the dependency lists for some packages could be leaner than they are.  Far too many times have I installed what looked to be a simple package, only to find that it has to bring twenty friends along, some of which look completely unrelated.  Back when I (admittedly foolishly) tried to run Ubuntu on five gigs of disk space, I'd fill it up in a blink, until I went in and uninstalled a half dozen kernel images and a list of libraries as long as my arm.  This problem plays into a more general attitude among developers which assumes that disk space is trivial (for more examples, see Maven, Ivy, rubygems, and other programming-language related package managers).  Even when it is, I don't want to fill my HDD with more data than it needs, nor do I want to spend time pruning it months from now.
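For what it's worth, the kernel cleanup itself is simple enough once you know which commands to run (the kernel version below is just an example):

    # see every kernel image currently installed
    dpkg -l 'linux-image-*'

    # purge a specific old kernel
    sudo apt-get purge linux-image-3.2.0-23-generic

    # let apt remove anything that nothing depends on anymore
    sudo apt-get autoremove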

For all these reasons, I've grown to rely on building from source almost exclusively.  It lets me know exactly what's on my PC, and where it is located.  For example, I find that it is easier for me to keep track of binaries if I install them to  /opt, rather than spreading them out between /opt/, /usr/, and /usr/local.  It also lets me install multiple versions of software, and change between them as needed.  To be fair, there are pitfalls with this approach.  Some things don't build nicely, especially on OS X.  Thankfully, most of my pitfalls have managed to double as learning experiences, and I hope that the lessons learned will make future installations smoother.
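My usual pattern, for what it's worth, is one directory per version under /opt, with a symlink to flip between them (the names here are illustrative):

    # build and install each version into its own directory
    ./configure --prefix=/opt/foo-2.0
    make
    sudo make install

    # point a generic symlink at whichever version should be active
    sudo ln -sfn /opt/foo-2.0 /opt/foo

    # put the active version on the PATH (e.g. in ~/.bashrc)
    export PATH=/opt/foo/bin:$PATH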

But there's one thing about building from source which bothers me more than anything else.  I'm starting to get fanatical about it.  When I tried to get Ruby set up last month, I found myself increasingly frustrated with the community's insistence on using Macports or Homebrew to grab needed pieces.  I felt allergic to the voodoo that the various Ruby management systems (such as RVM) practiced behind the scenes.  I even chafed at Google's insistence on making the pre-compiled version of Go a .pkg installer, which placed everything in /usr/local whether you liked it or not.  The way I saw it, the people who have an interest in installing Ruby don't fall into either of the two camps which benefit from package managers.  Abstraction is good in programming, but abstraction at this level can lead to someone working with a tool that they don't really understand.  If it ever breaks, they'll find themselves at a loss as to what to do.  As I mentioned before in regards to Ruby, convention over configuration can only go so far.  Sooner or later, you need to look under the hood.  Once you get over the initial hurdles, and your system is suddenly working exactly the way you want it, the feeling you get is nothing short of triumphant.

Again, I think my own stance is too strong.  It perceives the world as too black and white, and it does not factor in any number of edge cases.  Ultimately, it doesn't really matter whether you use packages or roll your own, as long as you can do the stuff you want to do.  It's not worth it to get worked up over these kinds of debates.  Not when there are stupid projects to undertake, like building Linux from Scratch.

Friday, August 24, 2012

Ruby Update

Remember that post I wrote about my adventures in installing Ruby?  The one I JUST wrote?  Turns out all of my struggles were unneeded.

You see, one of the reference books I was using is called RailsSpace, and it attempts to teach already experienced programmers how to use Rails by building a small social networking site from scratch.  I thought it was a great concept, and I enjoyed what I was reading.  The book's use of Ruby 1.8.5 and Rails 1.2 was the driving force behind everything I slogged through.  What's worse, after finishing that last post, I still couldn't get through the book due to version mismatches between Ruby/Rails/Rubygems/whatever else.  I was completely hosed, even in triumph.

Yesterday, I googled the RailsSpace book to look for guidance.  Oftentimes programming books have dedicated websites with updates, sample code, and sometimes even message boards.  I thought I might find that someone had the same struggles years ago when the book was new.

Guess what the first Google result is when you search on "RailsSpace"?  It's called the "Ruby on Rails Tutorial", which alone doesn't sound like much.  What it REALLY is is an updated version of the RailsSpace book, created by one of its authors, Michael Hartl.  While it isn't entirely the same, it still has you making a social networking site using Rails, and it is written in the same style and pacing as the book.  I've already begun to work through the first two chapters, and I've yet to encounter a single hiccup.

If I had known about this site a week ago, I could have saved so much time and stress.  If I had to pick a silver lining out of the storm clouds, I'd say that I still learned quite a lot in my failures.  I just don't know if it was worth the overall time spent.

No use crying any more over spilled milk I guess.  Time to do something productive.

Sunday, August 19, 2012

Ruby on OS X

A few weeks ago I decided to try (once again) to learn Ruby and Rails.  In the past, I never got far in this endeavor, on account of getting confused and/or bored. This time, I haven't gotten far because I haven't been able to get the damn language working the way I need it to.

Allow me to elaborate. I am using several pieces of dated but still useful reference material.  The good thing is that they were all published around the same time, meaning they all use the same (or similar) version of Ruby, Rails, and Rubygems.  The bad thing is that all of these tools have received major upgrades in the years since.  Version mismatches usually lead to incompatibilities, and sure enough, the commands listed in my Rails book are flat out broken when used with Ruby 1.9 and Rails 3.  I needed the old stuff, and I needed it configured properly.

My quest to achieve this perfect development environment was a monumental pain in the ass, redeemed only by the fact that it taught me a ton about Ruby, the Ruby community, and the innards of OS X.

My first major hurdle came while trying to compile Ruby 1.8.5 from scratch.  It turns out that as of Mountain Lion (or maybe Lion, I forget), OS X has no pure version of GCC.  It's all LLVM, with either a Clang or GCC front end.  Suffice to say that only the most bleeding-edge version of Ruby compiles nicely with these tools.  Everything else simply craps out at one point or another.  At this point, I decided to look for some alternatives.  RVM is a popular tool for managing different versions of Ruby, so I gave it a shot.  It failed spectacularly.  Unbeknownst to me, RVM installs different versions of Ruby by compiling them, so it failed just as hard no matter which version I tried to install.

I went back to the Internet to look for solutions.  The most common suggestion was to install Apple's pre-Lion version of GCC4.2 using Homebrew.  I don't like Homebrew (or Macports or Fink), and I wasn't excited about using it just for the compiler I needed.  So instead I found a wonderful guide to building GCC from scratch on OS X.  The instructions worked perfectly, and they left me with a working, up to date compiler that was installed to a sane location.
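I won't reproduce the guide, but from memory the broad strokes looked something like this (version numbers and the install prefix are illustrative):

    tar xzf gcc-4.7.1.tar.gz
    cd gcc-4.7.1

    # fetch GMP, MPFR, and MPC so they build alongside GCC
    ./contrib/download_prerequisites

    # GCC prefers to be built outside its source tree
    mkdir ../gcc-build && cd ../gcc-build
    ../gcc-4.7.1/configure --prefix=/opt/gcc-4.7 --enable-languages=c,c++
    make
    sudo make install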

I didn't know it yet, but at this point, I had all the tools I needed to get Ruby up and running.  Yet I still struggled when building from source.  One problem is that the Ruby source has some glaring issues (at least in version 1.8.5).  For example, I found one file that had #defined where it needed #define, and another which tried to pass a const char* to free() (which, at least on my system, requires a void* argument).  Then there was the file which defined two constants that already existed in OS X's version of stdlib.h.

The second issue was getting make to use my hand-built version of GCC.  It was easy enough to put it on my PATH, but that wasn't enough.  Some parts of Ruby's build process use the generic cc command, and on OS X this is a symbolic link to /usr/bin/clang.  I only discovered this last part today.  A week ago, I had what looked to be a working version of Ruby which failed to install gems.  When I googled the error messages, all of the hits concerned the aforementioned issues with compiling Ruby using LLVM and Clang.  I kept thinking "but I'm not using them.  I have GCC 4.7.  I can see that it's being used by make."  Only it wasn't being used by make.  At least, not during the entire build process.  When I changed the cc symlink, the build finally succeeded.

Just kidding.  It failed again.  For whatever reason, several of the modules in the ext/ folder of the source tree had empty Makefiles.  I'm still unsure as to how this happened, but the issue went away after running make distclean, and then another make.  This time, I had a fully working copy of Ruby 1.8.5.  After that, everything fell into place.  Rubygems.  Rails 1.2.  It all works fine, as do the book examples.
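If you ever find yourself in the same spot, the whole dance boils down to a few commands (the paths are just where I happened to install things):

    # see what the generic cc really is - on my machine it pointed at clang
    ls -l $(which cc)
    cc --version

    # throw away every generated Makefile, including the broken ones under ext/
    make distclean

    # be explicit about the compiler instead of trusting cc, then build and install
    CC=/opt/gcc-4.7/bin/gcc ./configure --prefix=/opt/ruby-1.8.5
    make && sudo make install

In hindsight, being explicit with CC is probably a gentler fix than re-pointing the system's cc symlink the way I did.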

At first, my takeaway from this ordeal was that Apple was doing a fine job fucking up OS X as a legitimate environment for programmers. By the end of it, my tune changed.  I still don't like the loss of GCC (and now I won't have to worry about it either), but if FreeBSD can get their system running with Clang and LLVM, I hesitate to say that either tool is a problem.  Rather, the issues seemed to stem  from the way Ruby is/was written.  I'm not saying it isn't worthy of being used in production environments, but I don't expect (and don't encounter) such sloppiness in any other major programming language.

I also learned a lot about the Ruby community.  Specifically, I decided that the concept of "convention over configuration" espoused by Rails developers extends into the community at large.  Did you know that RVM will change the way the cd command behaves on your system?  Even if the change isn't problematic for 99.9999999% of users, it is still something I'd like to be told about at installation.  I also grew frustrated at how so many of the proposed solutions to my problems boiled down to using a tool that would automagically download and configure the tools I needed in order to build and install the tools I wanted to use.  In other words, they wanted you to paste a few commands into a shell and hit enter, regardless of whether or not you knew what it would do to your system. To be fair, this isn't exclusive to the Ruby community, but I grow tired of it in general.

Don't worry though; I still have a Ruby specific gripe.  Before I installed Rubygems, I had a sneaking suspicion that the latest version would not be compatible with old versions of the language.  This is in fact true, but good luck finding this information out on the Rubygems homepage.  It may very well exist, but the only place I found the answer was an old question on Stack Exchange. Without that hunch, who knows how much more time I would have spent struggling.
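For anyone in the same boat: old RubyGems releases ship as plain tarballs with a setup script, so you can pin a version contemporary with your Ruby instead of taking whatever is current (the version numbers here are illustrative - check which pairing your books expect):

    # install an era-appropriate RubyGems against the Ruby it should manage
    tar xzf rubygems-0.9.5.tar.gz
    cd rubygems-0.9.5
    /opt/ruby-1.8.5/bin/ruby setup.rb

    # sanity check the pairing, then pull in the old framework
    gem --version
    gem install rails --version 1.2.6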

Ruby'ers - I get it.  I really do.  Configuration sucks.  It's boring, error prone, and compared to writing actual code, is no fun at all.  But dammit, sometimes it's necessary.  If you ignore configuration, you will have no idea where to look when something goes wrong.  And despite the fact that these languages and frameworks ask us to trust them, they can and will make mistakes.  The only way to deal with them is to know what the heck is going on in the first place.

Sunday, August 12, 2012

Core Image Fun House on Mountain Lion

All in all, I'm torn on what I think about OS X Mountain Lion.  If I'm lucky, I'll write a more detailed post explaining my personal pros and cons, but this post has to do with one of my biggest cons - the loss of Core Image Fun House (which I just wrote about in my last post).

The problem stems from Xcode 4.4.  It seems that Apple made some major changes to the IDE since the last version I had, including removing the /Developer folder, which was where Fun House, along with several other utility apps, resided.  The other utilities were retrievable through optional downloads from Apple's Developer site, but none of them seemed to contain Fun House.  I scoured Google for some sort of answer, and was able to confirm that it is, in fact, not being offered with newer versions of Xcode.  I still have no idea as to why, nor have I found anyone else who's wondering where it went (which was a reminder that the Internet doesn't always pounce upon every single change in the state of the world).

Thankfully, I did manage to find a solution.  Fun House is an incredibly simple program, and it looks like Apple used to include it as sample code back with Leopard.  The code is still archived, meaning you can build yourself a fresh copy of the executable.  Here are some general steps on how to do so:

1) Download the source code from the above link (look for the button labeled "Download Sample Code").

2) Unzip the archive, and open the project in Xcode (you should just be able to select File -> Open and open the folder itself).

3) Once the project is opened, it isn't runnable right away.  If you're on Mountain Lion, you're going to have project configuration errors to resolve.  If you click on the yellow exclamation point in the top-center of the IDE, it should bring up a list of errors in the left hand column.  Here's an image of what you should be seeing:

4) Double clicking on the second of the two errors will allow you to fix the settings.  You'll get a popup stating all of the changes Xcode will make.  It will look like this:
Click "Perform Changes".  If you get a message about enabling snapshots, choose whatever option you want.

Once the changes are made, the code should be runnable, with one final caveat.  You have to run Fun House as a 32-bit app, rather than a 64-bit one.  Most likely it'll be set to 64-bit by default, and you'll need to change that.  In the top left corner, near the run button, you should see something labeled "Fun House -> something something 64-bit".  Click on this to change it to 32-bit.  Then you can hit the run button.  If all goes well, Core Image Fun House will launch.

If you get a successful launch, you can move on to the final step - creating a .app file that you can run when you're not in Xcode.  In the left hand column, if you select the little folder icon, you'll get a resource view of your project.  Under the products folder, you should see the .app file.  Right click it, select "Open in Finder", and you can then copy the file to wherever you'd like.

Tuesday, July 24, 2012

Changing Icon Colors


Recently I was looking for a way to change the color of an application's icon in OS X.  The idea I had was to run two separate instances of Eclipse - one for Android development, and one for other work (this is probably unnecessary, but I've already done the work).  In order not to confuse the two instances, I wanted to change the color of the Eclipse icon so that the Android installation would be colored Android green, making them easy to tell apart.  The only trouble is that I'm terrible with anything related to graphics or image processing, so I had no idea where to start to make this idea a reality.

Fortunately, I stumbled upon the solution.  If you install the OS X developer tools, you get a program called Core Image Fun House.  You can use it to apply all sorts of effects to an Icon (.icns) file, including changing the color.  Here's what I did:


  • Find the folder where your Eclipse installation is.
  • Right click on the Eclipse.app file and choose "Show Package Contents".  Now you'll be navigating within the package, to get into the guts of the application.
  • Open the "Contents" folder, then the "Resources" folder.  You should see a file named "Eclipse.icns".  That's your icon.  Make a copy of it somewhere, and open the copy up in Core Image Fun House.
  • Core Image allows you to add filters to the icon in question. Click on the plus sign in the "Effects Stack" menu to bring up all the filter options.  Under the "Color Effect" section, you should find a filter called "Color Monochrome".  Apply this filter, and you'll see a color picker in the Effects Stack menu.  Use this to change the color of the entire icon.
  • When the icon looks right, go to File -> Export.  Choose a name and a save destination.  Lastly, change the file type from "JPEG Image" to "Apple Icon Image".
  • Take the newly saved .icns file and drop it into the resources folder of the Eclipse package (this will overwrite the original, so you may want to make a backup of it first).  If you do it right, OS X will recognize the new icon and replace it just about everywhere (you may have to add a new copy of the icon to the dock, if you have a permanent dock icon).

Saturday, June 23, 2012

Android Development: How to get GPU Acceleration for your AVDs

I was quite excited when I first read the news that the Android Emulator now supports using your PC's GPU for better graphics performance.  Trouble is, the aforementioned announcement doesn't contain any information on how it works.  I only discovered the answer thanks to Google's revamped developer site, wherein I stumbled across an emulator setup guide I had never seen before (I'm assuming that it actually was there before, and that it isn't a brand new page.  I could very well be wrong on that).

Anyway, the answer is found in the settings page for your virtual device. In the hardware section, you can click "New", and choose the "GPU Emulation" property.  Set it to yes, and your graphics card will be put to work (provided it is supported; I'm not sure how well it works in all environments).
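If you prefer the command line, the same switch can be flipped there as well (as far as I can tell these are equivalent; "MyDevice" is whatever you named your AVD):

    # launch an existing AVD with GPU emulation enabled for this run
    emulator -avd MyDevice -gpu on

    # or make it permanent by adding the property to the AVD's config file
    echo "hw.gpu.enabled=yes" >> ~/.android/avd/MyDevice.avd/config.ini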

Click the 'New' button in the 'Hardware' section, and you'll be able to add GPU emulation as a property (It's shown at the bottom of the property list here).
So far, the results are impressive.  The virtual device still takes a while to load up (though not as long as it did when I first tinkered with the SDK, years ago), but once it's on, it is very smooth and responsive, enough so to make it worthwhile for testing.  This could make it much easier to test future projects on multiple device types.

The guide above also explains how to configure virtual machine acceleration, using the VM extensions supported by modern processors.  This is another welcome feature for improving emulator performance, but it looks more complicated to set up.  Namely, you have to enable the VM extensions for your CPU via the BIOS, and install extra virtualization drivers.  The drivers are the real showstopper, as the guide warns that they can conflict with the drivers for other VM software like VirtualBox.  The fix is to only enable one set of drivers at a time - this is easy enough in a *nix environment, but I'd have to go and find out what VirtualBox installs.
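On Linux, at least, the juggling act is mostly a matter of loading and unloading kernel modules (a sketch for an Intel CPU; kvm-ok comes from Ubuntu's cpu-checker package, and I haven't checked what VirtualBox's modules are called):

    # confirm the CPU advertises VT-x/AMD-V and that KVM is usable
    egrep -c '(vmx|svm)' /proc/cpuinfo
    kvm-ok

    # unload KVM before firing up VirtualBox...
    sudo modprobe -r kvm_intel kvm

    # ...and load it again when you want the accelerated emulator back
    sudo modprobe kvm_intel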

Once you've got your drivers under control, there's one last caveat: to use VM acceleration, your virtual device has to be running an x86 CPU, rather than the traditional ARM chip.  This isn't really a problem - Intel has already put out an x86-based phone, and the support is there.  You just have to remember to take the extra step, because AVDs default to ARM.
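Assuming the Intel x86 system image is installed through the SDK Manager, creating a matching AVD from the command line looks something like this (the target id and name are illustrative):

    # list installed targets and note one that offers an x86 system image
    android list targets

    # create an AVD that uses the x86 ABI instead of the default ARM one
    android create avd --name ics_x86 --target android-15 --abi x86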

If I dip my toes into this CPU virtualization, I'll come back here with my findings.

Sunday, June 17, 2012

Notepad App: The Rest

With the Scratchpad finished, the next step was to get the regular notepad functionality working.  Since I already had a working example, all I had to do was fit it into my existing codebase.  This turned out to be fairly easy, and I had a half-working example after no more than an hour's work.  Getting it working well was another matter entirely.

For one, the text on the main screen was small, and only the text itself was responsive to touch input.  If the title of a note was small, it was almost impossible to open (and if there was no title, it was completely inaccessible).   This fix taught me just how important it is to understand the built in Android layout rules.  The "wrap_content" rule, for instance, can be a bad choice for setting the width of a view. So can "match_parent" if you aren't aware of what the parent view's width is set to.  This is yet another area in which blindly following sample code can get you in trouble, when all of a sudden your GUI is lined up all wrong and you don't know why.  I ended up re-evaluating the width and height of every View in every XML file in the app, since this forced me to justify each and every setting.  By the end, I had everything looking the way I wanted it, and more importantly, I understood why.

My initial porting attempt also broke a small (but handy) feature from the sample Notepad app; if there were no saved notes, a special "no notes" message was displayed instead.  In my app, however, this message was displaying even when there were notes present.  It turns out that this feature requires your app to be designed in a very specific fashion, which happened to be the case for the sample app, but not mine.  Specifically, the Activity in question needs to:

  • Subclass Android's ListActivity class.
  • The layout file associated with the Activity needs to contain one ListView and one TextView, and they must be id'ed using two of Android's standard View ids - "list" and "empty".

Looking at my app, I knew that my layout file used different id names, and that my class only extended Activity.  An easy fix to be sure, but there was another problem - could I still cram the Scratchpad button onto the page?  If I added it to the layout file, would it display?  And even if it did, would the ListActivity accommodate the code responsible for handling click input?

The answer to both of these questions is "yes" and "yes".  By using a ListActivity, you benefit from having a built-in ListView object to use, but you aren't married to it.  Remember that you can technically associate any layout file to a ListActivity.  It helps if some of the Views defined in the layout are structured in such a way to work well with the class, but even then you can still add more on top of it.  Then, once you add extra buttons and gizmos, you simply write a click handler for each one. For me, this translated into an extra layout entry (for the Scratchpad button), and a single, small click handler.  Nice and easy all around, and I learned a ton about how views and activities relate to each other.


After that, it was just a matter of making a few final touch ups, such as having the cursor appear at the end of the title when opening a note, and giving notes a default title when none is entered. All told, this phase of the work took almost as long as the last, but I got far, far more done.

I'd like to end this entry with a list of additional lessons learned.   All told, this was a very rewarding experience, and I feel more motivated than ever after achieving this small success.
  •  For the first time, I used version control on a personal project.  I'm using git, as I'm used to it from work, and it worked like a charm.  When I tried to refactor the app to use a ListActivity, I created a new branch: it was nice to know that, should it not work, I could revert to a working state.
  • I'm planning on going back to this app at a later date, to make some improvements.  Specifically, it could use better error handling, and it would be nice to have more flexible controls for adding/deleting/editing content.  Perhaps even a home screen widget for displaying the Scratchpad.
  • I ran into an issue using StringBuffer; namely, I was trying to reuse the same StringBuffer instance without cleaning it out, so it kept preserving its previous content.  Ideally, what I should have been doing was creating a new object every time, but sometimes I still think like a C coder, and I considered that a waste of memory.  Yeah yeah, there's garbage collection, I know, but wiping the buffer was trivial, and now I feel like I can go to sleep tonight.
  • Next up, a tasks app.  It'll be far more complicated, but even more useful too.

Saturday, June 16, 2012

Notepad App: Scratchpad

I've finally finished a usable part of my personal app.  The Scratchpad function is fully working.

I made a serious mistake when working on this part, by trying to store the Scratchpad data in a SQLite database.  Such a DB makes sense when you are storing multiple notes, but the Scratchpad is just one big note, a text dump for storing quick thoughts that I don't want or need to create a full, separate note for.  There's no need to wrestle with database queries and row integrity.  I lost several hours struggling to get things working, to the point where I almost gave up.

Thankfully, I sat back down and researched alternative storage methods.  After all, this kind of data could easily be stored as a text file, and as it turns out, Android has tools for doing this.  You can create storage files that are readable only by the app itself.  Using this technique, I got the Scratchpad working very quickly, and while I am happy it works, I'm also frustrated by the fact that I'll never get those wasted hours back.

This reveals a major flaw in my skills as a programmer.  I rely on sample code too much, to the point where it holds me back.  I needed to store data; what I should have done is look through the reference documents for ways to do this.  Instead, I looked through the samples I had previously read through.  Since all of these samples were dealing with ListViews, they all used SQLite tables.  It wasn't the only storage choice available, but it was the only one I was exposed to.  Thus my tunnel vision set in.

The other problem with using sample code is that sometimes the only thing it can teach you is how certain parts of the API work within the context of this specific example.  Unless you can read the code and understand how to use a featured method  in any given situation, then you don't really understand how to use it.  If you then proceed to copy and paste that code into a completely different context, what will you do if it breaks?  Scramble, of course, because you don't know enough about how it works.

I made these mistakes with the Scratchpad, and for my own sake, I need to learn from them if I am to get the remainder of the app completed without experiencing this level of frustration.


The Moral of the Story: Use sample code to get a feel for how a complete program reads and flows.  But when it's time to do something yourself, figure out what it is you want to do, and look through the API to find pieces that will help you do that.  Work with them until you find out what they really do, and if they aren't suiting your purposes, start looking again.  By the time you have a working prototype, you will be able to explain why it works (and if it turns out there's a better way you could have done it, don't worry.  That can come in a new version, when your skills have improved).

Notepad App: Intro

My first attempt at an Android app is to make a notepad application.  As it turns out, the Android SDK has a fully functioning notepad in its sample code folder.  It's simple, but if all you want is to write and save notes, it's perfect.

However, I want to do slightly more than that. Namely:

- I want to be able to write individual notes, with titles, that I can save (this is what the Android sample project already offers).
- I want to have a single, giant note, called the "Scratchpad", which I can access quickly, and fill with quick bits of text.  The point here is not organization, but to allow me to just get things down before I forget.

In theory, this should be a simple job.  Just take the existing example, add a new button on the screen for the Scratchpad, and an additional, super basic text editing window for modifying it.

Let's see just how simple it will be for me in practice. Consider this a developer diary of sorts.

Sunday, May 27, 2012

Android bloggers

For me, by far the worst aspect of the Android ecosystem is the community of bloggers surrounding it.

On one end of the spectrum, you have the professionals.  These folks are technically proficient, so their sites are at least partially useful.  But being professionals, they absolutely need to attract (and maintain) a large readership.  What that usually translates into is an extreme focus on only the latest and greatest devices.  When you constantly remind people that they are behind the times (even if their phone is brand new), it messes with them psychologically.  They will perpetually lust for gizmos which are out of their reach, and so they will constantly revisit these sites to get their fix.  When people get into this gadget frenzy, they forget that the bloggers themselves live in a tech writer bubble that in no way reflects the habits of the average user.  They get demo units and early access; they often have a new primary phone every month (or every other week!).

I find these professional sites to be a great resource when you're in the market for a new phone, but once you find the right one, there's no good reason to come back.  You're not going to find much in the way of tips and tricks for your new hardware, and after a couple of months, you might not even get timely news in regards to OS updates.

On the other side, you have the amateur/barely professional bloggers.  These sites are run by writers whose bylines state that they've had an Android phone for a year or less.  They refuse to do anything resembling journalism.  Their news pieces are instead culled from rumors, speculation, and bullshit.

To give you an example, do a search for news on the 4.0.4 software update for the Verizon version of the Galaxy Nexus.  It's been mysteriously absent for months, and no one's sure what's happening, or when it will arrive as an Over the Air update.  But if you do a Google news search, you'll find tons of Android sites stating that the update is already out OTA.  Their proof?  Other shitty Android sites, who in turn linked to yet another.  Follow the trail far enough, and you'll realize that there was never any concrete evidence supporting these claims.  It appears as if the entire community is linking to each other in a never-ending circle of BS.

Staying with the Nexus, the other typical news piece announces that the update is available directly from Google, and is ready to be installed manually.  You just need to make sure you unlock your phone.  And root it.  Or maybe you don't need to root it.  Who knows, because everyone gives a different set of instructions on how to apply it (and their comments sections are filled with people asking why it isn't working).  Most of these sites have no idea what they're talking about, and I find myself uncomfortable trusting anything they provide, be it news, tips, or a download link.

It's unfortunate, because it means that a lot of the potential inherent in the platform will be inaccessible to a huge number of users who simply don't want to be given the runaround.  Whenever I see someone express their fears of rooting their phone, I completely understand where they're coming from.  With some of these sites, it would be akin to driving your sedan into a shady chop shop for repairs.

Sunday, May 20, 2012

Android Development

Last winter I was inspired to try my hand at iOS development. The experiment didn't last very long.  I just couldn't get the hang of any of its key components.  The Xcode IDE, the Objective C language, none of it clicked.

In a similar fashion, I recently tried to get into Android phone development, but this time I'm actually making progress.  I can think of at least one obvious reason - thanks to my work experience, I'm already familiar with Java development and the Eclipse IDE. The question on my mind is just how much of a difference this makes.

For example, regardless of the language being used, I feel like Google's documentation is better than Apple's.  But is that really accurate?  Or is it simply that the Android tutorials were easier to get through because I wasn't also trying to learn a new language?  On the same note, I feel like working with Eclipse was much simpler than Xcode.  I know why I like using Eclipse, but is Xcode really as bad as I think?  I know my perspective is clouded, but I can't tell if it is warping my expectations.  I'm not surprised that working on iOS proved harder at first, but did I give up too quickly because it wasn't exactly what I'm used to?  In other words, I don't want to give up on something unless I know it really, truly isn't clicking with me.

All I can do is describe the differences in my two experiences.  If you're out there, feel free to tell me where I'm being too harsh or too lenient on either platform.

IDEs - Xcode vs. Eclipse

Xcode has a weird way of switching between contexts.  One minute the interface looks like this:

And the next, like this:

and sometimes there seems to be no obvious way to switch between one and the other. Technically, this isn't much different than how Eclipse displays its features across multiple perspectives (each with different visual configurations), but the saving grace of Eclipse is that it never takes away your control.  You can add, remove, and relocate a feature to any part of the IDE that you want, and you can always access a list of available perspectives.  In Xcode, I would find a way to go back to a previous screen layout, only to find it was missing a window somewhere.  Where did it go?  Did I follow the right workflow?  I have no idea.

Additionally, Xcode is laid out very much like iTunes.  Aside from the sense of visual unity, I don't think there is any benefit to this decision.

Documentation

The documentation for the two platforms differs in both content and ideology.  Here's a clip of Apple's "Getting Started" page for iOS development:


Set aside the fact that this looks like a generic search result page, and notice that the first document listed covers such topics as App Store submission.  This is supposed to be developer documentation - monetization shouldn't be the first priority for such an audience.  

Furthermore, the subsequent topics each tackle a very narrow aspect of App development.  It's important to understand Networking and Data Management, but until you understand the general structure of an application, how will you know where (and how) to apply these topics?

Ultimately, I gave up on relying exclusively on Apple's provided documentation, and started looking elsewhere.  I found that no matter where I looked, every guide on iOS development shared an obsession with the MVC design model.  Let me be clear - there's nothing wrong with MVC.  In fact, it is crucial for the purpose of most apps.  But the thing about design models is that they're guidelines.  There aren't many hard and fast rules governing them.  This means that different people will interpret them differently, and there's no way to establish a definitive, "right" way of doing something.  Breaking up your code according to the MVC model can have its benefits, but it won't magically make your code work better or run faster.  But iOS developers don't seem to agree with me on this, and so they focus on adhering to the model at the expense of explaining the actual code.  A typical line from a tutorial would read like this:
We need to start off with our Model, so here's some code that will create one for us.  Don't worry too much right now about what this all means, just know that this is how you set up Objects/variables/messages/etc in Objective C.
If there's one objective observation I can make, it is that these iOS tutorials are at a disadvantage, in that they assume (understandably) that the reader is not well versed in Objective C.  They're trying to teach two things at once, and since most app developers are more interested in making piles of cash money than in learning the nuts and bolts of a programming language, the details of ObjC are deferred as much as possible.  That being said, it is important to know why you are doing something the way you are, even if you don't entirely know how it works.  The sample applications I read through were purposefully overcomplicated, and by the end of each I had no better handle on Objective C than I did before I started.

Google's Android documentation takes the opposite approach.  It is clearly by developers, for developers.  If you work through the beginner guides, you'll barely see a word written about MVC.  When writing a basic app, there's not enough code to justify separating the model from the controller, and the view is little more than a few lines of XML.  The authors of the dev. docs understand that there's little reason to introduce MVC until there's an actual benefit to using it.  Until then, they are more than happy to help you build a simple app from the ground up.  Here is a snippet of the Android Development site:


Notice that the focus is entirely on how to build applications, with additional info on individual topics.  If you take advantage of the Android dev. docs, you can get a solid foundation upon which you can add the specific features you need.  The progression feels entirely natural.

The major disadvantage of the dev. docs is that instead of assuming you don't know the language (like Apple does with Objective C), Google assumes you're already well familiar with Java (and in some cases, the concepts behind the Android SDK).  One early tutorial (a simple notepad app) namedrops both anonymous inner classes and the Dalvik VM.  If you know what any of this means, the tutorial is a joy to work through.  If not, you may find yourself completely lost.  This isn't necessarily a good thing, but at the very least there's no mistaking who the intended audience is (and I'd still argue that the articles do a pretty good job at explaining each method call, right down to what the arguments are for).  

Final Verdict

There are a lot of ideological differences between iOS and Android, and it isn't surprising to see them bleed into the development practices for both platforms.  That being said, the situation with iOS isn't as bad as I've made it out to be.  Apple's introductory material may be general and useless, but the full documentation set manages to cover every concept in great detail.  If you're a serious programmer, you'll sort through it all if you really want to, and I think Apple knows that.  They don't take care of this audience because they know we'll take care of ourselves if we have to.

Sunday, April 29, 2012

Gunpla Chronicles - Upper Body

For reasons which will be explained later, I'm going to detail the remainder of the build process in this single post.  Apologies for the lack of photos.

Right Arm - I was a bit too tired when I assembled the right arm, and when I get tired, I tend to do things fast and sloppy.  Some of the pieces became warped at the cut points, though luckily most of these pieces were part of the exoskeleton, meaning they'd be concealed under the outer armor.  The rest of the assembly process went smoothly, which was a welcome state of affairs after the crisis I had with the leg joint.

My stickering got a lot better with the right arm.  I found myself coming up with a process; I would remove the sticker with the hobby knife, transfer it to the nail care stick, and apply it to the model.  I then used the stick to reposition and set it in place.  All told, the stickering wasn't perfect (the larger decals on the shoulder-mounted shield are a little off angle), but it was still a dramatic (and noticeable) improvement.

I did the entire arm in a single night, spending a little under three hours.

Left Arm


My cuts were a little less messy with the left arm, and my stickering was better still.  I'm especially proud of how I placed the decals on the spiky shoulder pad.  This is also the section where I got a lot better at sanding and filing.  I discovered that if I used a lighter touch, I could remove much of the discolored plastic. I also began to file along the length of the piece.  Previously, I'd put the tool at an angle, focusing it on the target area.  If I wasn't careful, this could cause even more damage.  I used my new skills on the scratched section of the Heat Hawk, and the results were more than acceptable.

Between the legs, arms, and weapons, this was my best work yet.  When I started writing these posts, I said that I didn't believe one could get significantly better as a builder after just one or two kits, but now I can see how wrong I was about that.  If you strive to do good work, and pay attention to your mistakes, you can make major strides after just one model.

This arm was also done in one night, though I can't recall how much time it took.

Chest


I felt more awake for building the chest than for either of the arms, yet I probably made some of the worst cuts yet.  Just like with the arm, a lot of them ended up being concealed, but there were more than enough on the visible sections.  I also got a little too heavy with the file, to the point where I started to file away otherwise good sections of a piece.  I don't think this is easily apparent anywhere on the torso, unless someone were to hold the model right to their eye.  Nevertheless, I must not repeat this performance in any future builds.

I did the chest over the course of two nights, and probably spent another three hours total on it.

Head


The head is the defining part of a Zaku, and I wanted to make sure I didn't screw it up.  I was careful to the point of paranoia, and it paid off in spades. I even managed to note and interpret the suggestion in the instruction book which said to set the red eye sticker before sticking the eye construction into the head proper.

In terms of perfection, I think this was ultimately my finest work (though due to the greater number of stickers, I'm still most proud of that left arm).  Build time was well under an hour.

A note on topcoating


I used my entire can of paint to topcoat this model.  In fact, I barely had enough to give the head a once over.  This makes me wonder whether I didn't start with enough paint in the first place, or if I simply wasted a lot of it due to bad technique.  I don't think any of the parts wound up with a spotty topcoat, but I'd have liked to have gone over them one more time.

Gunpla Chronicles - Surgery Edition

In today's post, I'd like to show you my first near disaster.  Take a look at the following picture of the Zaku's right leg:


Part of the joint is pretty nasty looking, eh?  That's because it snapped off when I tried connecting the legs to the torso.

I'm not exactly sure how it happened - or rather, I know how it happened (I tried pulling it back out from the torso when it seemed to be in too snugly), but I'm not sure how it broke the way it did.

What I also know is that I almost made the situation go from "bad" to "unfixable".  Note to any other potential Gunpla builders - try not to panic if something breaks, but if you can't help yourself, step away from the model until you can calm down.  If you're panicked, you'll be pressed to find a solution as quickly as possible, and in such a state of mind, you likely won't come up with the best one.  Also, your work will be rushed and sloppy, which can easily sabotage your plan.

This is exactly what happened to me.  My first reaction was to see if I could simply glue the piece back onto the leg.  I knew I would lose flexibility, but I was willing to sacrifice that if need be.  Unfortunately the glue wouldn't stick.  I thought it was because I was simply being impatient and not letting it set, so what did I do?  Continue to be impatient and not let it set.  The broken joint piece became encrusted with glue as the night went on, and I began to lose hope.

I was only saved by the wisdom of my wife, who pointed out that what I was using was not super glue, but "Extra Strength Adhesive".  She promised that if I got some actual super glue, the piece would hold.  Naturally I trusted her, and feeling better, I went to sleep for the night, determined to get a new tube of glue the next day.  In a narcolepsy-fueled bout of heavy sleep, I had a lightbulb moment.  I woke up and went to test my theory, and sure enough I was right.  It turns out that certain pieces of sprue had the same diameter as the hole where the broken joint piece plugged into the leg.  If I sanded off the broken part of the joint piece, I could super glue on a small piece of sprue, at which point the piece would be functionally good as new.  The only problem remaining would be removing the plastic that was stuck inside the leg after the break.  Some further googling revealed that I had a drill bit of exactly the right size for the job.

I knew that if I took a drill bit to my model, I could very well break it for good. So I went online to find out if anyone else had the same crazy idea.  This is when I came across the concept of "pinning", in which you fix or reinforce a fragile joint by drilling a hole all the way through one or more pieces and inserting a support shaft.  With my theory confirmed, I knew my plan could work if only I was careful. I decided to turn the drill bit by hand, partly to be careful, and partly because I didn't have a working drill to use.  Suffice to say that after a few cuts and some very raw fingers, I did it.  The broken plastic fell right out, and I was left with a perfectly ready joint.

This is the closest example I could find of what "pinning" is.


The next day, I got some new super glue, and by sheer coincidence I also found a tube of the Extra Strength Adhesive on the shelf.  Looking at the package, I saw that it was not meant to bond to plastic.  My wife was right; I was using the wrong stuff from the very start.

The super glue, on the other hand, worked faster than I expected (I almost got my fingers stuck together). It bonded the joint to a small piece of sprue, and I filed it down to the right size. I plugged it back into the leg joint, and voila - a perfectly working leg, albeit one that was ugly as sin.

I'm incredibly happy that I managed to fix my problem and repair my Zaku, but I wish I had never put myself in this position in the first place.  This incident was an important reminder that these kits are not toys.  It is crucial to be slow and careful when moving any joint on any piece, no matter how sturdy it feels.  A broken part isn't worth a moment of impatience.


Gunpla Chronicles - Torso and Weapons

At this point, the torso is built, but not fully assembled (I have to clearcoat them first).  For my standards, I did a near flawless job with the stickers, but the plastic itself got roughed up a bit much.  Here's a bit of scuff on the right side:


Aaaaaand the left side:


The only major screwup with the stickers can be seen in this pic. On the white sticker on the bottom, there's supposed to be a shiny gold part near each tip of the 'V' shape.  On the right side, I accidentally cut that part away.  This mistake bugs me the second most of any of them so far.


The most annoying mistake belongs to the weapons.  I built them out of order on account of them being so simple.  It was a nice way to wind down the evening, but look what I did to the Heat Hawk:

If the clearcoat doesn't cover up some of that scratch, I'll have to attribute it to "battle damage".  On a side note, I kind of wish that light grey blade on the Heat Hawk was colored translucent red.  Whenever it is wielded in the shows, it gets red hot, and I think it'd be a cool effect (maybe one day I'll try and add that myself with some paint).

The rifle and bazooka came out mostly unscathed, but here are some pics for completion:


The next step for me is to coat all these pieces and put them together.  I'll probably start the next post with those results, followed by the building of the upper body.

Gunpla Chronicles - Beginning the build

Before chronicling the build process, here's a list of tools and supplies I'm working with.

Sprue Cutter - I actually found a sprue cutter at Hobby Lobby, so I decided to grab it rather than relying on nail clippers for cutting out pieces.  So far, I'm happy with the decision.  It works really well as long as I'm careful.  I'm sure the clippers would function well enough, but I feel like they would also be less intuitive.

X-acto Knife - In all honesty I didn't need this, but it cost three bucks, so it wasn't really a splurge.  I've used it to remove certain pieces in lieu of the Sprue cutter, and for cleaning up some of my shittier cuts.  I've also found that some of the kit's joint pieces are kept rigid via very small pieces of connecting plastic that only the knife can really remove.  I've also found it to be useful for taking stickers off the sticker sheet.

.02mm art marker - It isn't a Gundam Marker, but it's acid free and wipes off clean, so I think it'll suffice.  It looks like this was one of the thinnest markers available, and yet it is still a bit too big for some of the panel lining I've done.

Cotton Swabs - I'm using these for removing mistakes made with the marker.

Nail care stick - I learned that you can use a toothpick to better position stickers on a model.  Instead, I'm using nail care sticks.  They're longer and sturdier, making them easier to wield, and rather than having two pointy tips, the bottom of the stick is a flat end.  This makes it a 1-2 punch for sticker application; one side positions, while the other sets in place.

Super glue - I had a bottle already, and I keep it handy just in case.

Nail clippers - I found an old pair, which I'm also keeping handy.  I occasionally use the nail file attachment to clean up excess plastic.

Actual nail file - Same use as above.  I think it's a bit too heavy in grit, so I have to be careful.

Hobby Tweezers - My wife insisted that I buy these, instead of using facial tweezers.  She says they're better for the task.

Testors Matte style spray lacquer - Most builders say that even if you don't paint your kit, you should still spray it down with clearcoat paint.  It removes the toylike finish to the plastic and can hide certain mistakes.  Most builders also recommend specialized brands, but I just went with basic Testors.  I'll report on my findings later.

Now, without further ado, the build....

I started where the manual told me to - the right leg.  It took me three and a half hours on my first night of building, plus an unknown amount of time to apply the stickers later on.  I definitely made some mistakes on this one, mostly in regard to the stickering.  These decals are incredibly tiny, making them tough to remove, tough to manipulate, and tough to position.  I didn't have a good feel for getting them on, and as a result I lost a few to the carpet, while others went on with wrinkles.  Some came out so terribly that I took them off myself.  My other big mistake was not letting my panel lining dry completely before continuing my work.  The marker started to smear where I touched it, and some of it came off underneath the stickers, giving them a permanent blackish hue.  Nothing I can do about that now.

On the other hand, I'm rather proud of how well I did with cutting.  I didn't leave behind too many noticeable marks, and those I did make are hidden underneath the model's exterior armor.

As for filing, I shied away from it.  The two filing tools I currently have seem a bit too aggressive, and can really scratch up the plastic.  Since I'm not painting the kit, I won't be able to cover up these mistakes, so in the future I'll have to be more precise about where I use them.


As you can probably tell from the pic, the sticker on the knee portion is janky, as are the two on the foot.  Also, on the front right part of the toe there is some damage from cutting, though in this picture it is obscured by shadow.  

With the left leg, my fortunes reversed.  I worked much more quickly during the cutting phase, and wound up with more bad cuts than I did on the right leg.  I tend to do everything fast and sloppy when I'm tired, and I believe I was a bit too sleepy when I did the work (also, the beer I drank got to me a lot more than the one I had on my first night).  Here's a good example of where my sloppiness is clearly visible; the left circular vent thingie is nice and scratched:
 
On the other hand, I got into a nice groove with the stickers, and for the most part they came out a lot better.  Here's a comparison of the backs of the legs to give you an example (notice the gold stickers in particular):

The right leg was the first piece of the kit I gave the clearcoat treatment to, and if you look at the photo above, you can kind of see the difference in the shine and coloring of the two legs.  So far, I like the results. 

The next post will cover building the torso and weapons.



Gunpla Chronicles: About Gunpla

When I returned home, the first thing I had to do was pore over the Gunpla scene and figure out what I had gotten myself into.  The news was almost entirely good.  Most modern model kits do not need to be painted, nor do they need to be secured with glue or modeling cement.  The only thing stopping them from being buildable "out of the box" is the fact that you still need a handful of tools to put them together correctly.

The other good news is that, contrary to my fears, the Gunpla scene is a healthy mix of casual hobbyists and hardcore builders.  This is a big deal for me; the internet communities surrounding most hobbies are made up entirely of enthusiasts, and enthusiasts lack the perspective needed to help beginners.  They'll recommend setups which require significant investments in time and money, without any regard to the fact that a newbie is probably looking to get their feet wet with something small and simple, to see if they're really interested before they lay down any serious cash.

This is not the case with Gunpla.  There are still super fans with potentially damaging advice for newbies (for a good example, look up Danny Choo's posts related to the topic), but I found far more guides and tutorials warning rookies not to jump into the deep end.  Some of their aggregate suggestions include:


  • Don't start off with a Master Grade or Perfect Grade kit (but if you do, it probably won't kill you).
  • Don't buy every single tool someone recommends.  Buy a few basics, and try to rely on things you have lying around the house to fill in the gaps.  Become familiar with these simple tools before moving on up to something better.  To give a more concrete example, a newbie can get away with using nail clippers and a nail file to snip out and clean up pieces, and once they get a feel for it, they can choose to upgrade to a sprue cutter and sandpaper.  No one I found (aside from Choo, the moron) insisted that you run out and get yourself a spray gun and air compressor for painting.  
  • You need to take your time.  Not many people in the community seem to be impressed by speed.  It is better to go slow and wind up with a great looking piece.
  • Most tutorials pointed out that newbies will make mistakes (some writers went so far as to show examples of their early screwups).  Their suggestion is to learn from them and then move on (after trying to salvage your model, of course).  The overall vibe I got was that no one expects your first kit to be flawless, so don't worry if it isn't.
  • I found a few forums which looked very supportive.  When someone showed off a finished kit, the regulars were quick with pointers and comments, but they were also happy to see folks of beginner or intermediate skill actually finish a build.  It felt like these users wanted to help their colleagues get better and become more involved in the hobby, as opposed to erecting a wall of impossible standards that only a dedicated few could climb over.  To give an example, I saw a husband and wife team show off their build of a High Grade Gundam AGE model, considered one of the best starter kits available right now.  The feedback in the comments section was enthusiastic and positive.  It was almost hard to believe.
As for the negatives of the scene, they were fewer, but still present:

  • Most Gunpla fans import both kits and equipment from Japan.  That means they tend to recommend tools and materials which are specifically made for Gunpla.  These include special Gundam Markers meant for filling in panel lining on the kit's armor, as well as special glues and clear coat paints.  There are only two ways to obtain these goods - from an online retailer, or "your local hobby shop", which for most people is something which only exists in fairy tales.  Since most online shops sell the kits and the accessories, you can probably get all this stuff from one place (except for the paints.  Those spray cans apparently can't be brought into the States anymore).  Otherwise you'd have to buy them piecemeal.  I know it sounds odd, but I don't like buying small, inexpensive merch off the Internet.  It seems like a waste of shipping and handling, and I don't like the notion of registering for a store solely so I can spend five dollars.  I'm sure all these specialized tools are fantastic, but I've decided to take my chances with whatever I can acquire at Michael's or Hobby Lobby.

  • On a similar note, even the friendliest guides were against the use of plain, standard stickers on a Gunpla model.  Everyone seems to agree that they don't look very nice when applied.  Their recommendation is to use special "water slide" decals (I forget how these work) and rub-on decals, which, as the name suggests, rub the details right onto the plastic, like a rub-on tattoo.  Some of the best quality kits come with these special decals; for the ones that don't, you have to go online and see if Bandai (or some other company) sells them for your specific kit as a standalone product.  This is one of the only examples I've seen in which the community as a whole asks a bit too much of newbies.
Lastly, some thoughts of my own:

  • I understand that you are supposed to learn from your mistakes and improve your skills with each kit.  But something about this feels counterintuitive.  Even the simplest kits aren't so cheap that I'd buy one to serve as a throwaway practice run, and when your first kit happens to be one of your very favorites (which mine certainly is), you don't want it showing off all your first-time mistakes.  The only remedy I can see would be to rebuy and rebuild it in the future, once I know how to do a better job.  That's my mistake for not starting with something I care less about (the gift shop at Disney had a Real Grade standard Zaku, as well as some other suit from Gundam Seed that is wholly unfamiliar to me).

  • What's more, as cool as I find Gunpla to be, as a married man looking to start a family, I don't see a lot of display space in visions of my future homes.  I'm not sure how many kits it takes, on average, to get really good, but I can't imagine accommodating any more than half a dozen in my lifetime (unless I feel comfortable enough with myself to continue it as a hobby after retirement, in which case I could go nuts).  As a result, I want any kits I buy to be done right.
  • Despite what I said above, I've already made mistakes on the parts I've built due to acting too quickly and carelessly.  I can't even follow my own advice!

Saturday, April 28, 2012

Gunpla Chronicles: Intro

While we were at Disney World, my wife insisted that I buy something "nice for myself."  This was not an easy task for me.  When I was a child, souvenirs were mostly off the table, and as an adult I'm still somewhat affected by that conditioning.  But even when I find something intriguing, I'm held back by the fact that anything I'm going to find is either going to be a simple trinket or a replacement for some household item I already have.  In my mind, a souvenir should be something special, something you keep for years as a symbol of all the good memories you made on your trip.

In the end, I found something that was absolutely perfect.  It won't be used up or broken, I'll see it all the time (thus reminding me constantly of the best vacation I've ever been on), and it lines up perfectly with my interests.  I bought a Gundam model kit.

Never in my life did I expect to find something like this in a Disney park, but the gift shop in the Japan section of Epcot's World Showcase has all sorts of anime-related merch.  I've never built a model kit before, and when I picked it up, I had no idea if I'd even have all the tools to do so.  But seeing it on a shelf, I knew I had to have it.  Never mind the fact that model kits can be easily obtained on the Internet.  To someone outside the hobby, these things still feel rare and exotic.  Finding one on a physical store shelf felt akin to unearthing buried treasure.  I vowed not only to buy one, but to build it at any cost.

This series of posts chronicles my attempt to build the Real Grade Zaku II Char Custom.  I'll do my best to take quality photos of each section as it's built, but I make no guarantees about my photography skills.

Sunday, February 12, 2012

Javascript

I'm currently trying to come to terms with the continued rise of Javascript in all areas of web development.  Nowadays Javascript can be used to drive your web application, your UI's nifty graphical effects, and even the web server itself.  It is everywhere, and can seemingly do everything.  If you want to work on the web, you need Javascript in your toolbox.


I find this disorienting, to say the least.  Back in the day, I was never a Javascript user, so I still see it as a toy language, something used to help validate forms and create annoying alert boxes.  I also remember its reputation as the bane of performance and security minded users everywhere.  How did a language with so many negative traits become the de facto future of the web?  Hell, how did such an old language become the future?  Usually, when developers want to solve a bleeding edge problem, they abandon the old and reinvent the wheel (even when it isn't necessary).
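
To be concrete about the stereotype in my head, here's a quick sketch of the kind of Javascript I remember; it's a made-up example of mine, with a hypothetical validateForm function wired up through a form's onsubmit attribute:

// The old job description: Javascript as a form-validation and alert-box tool.
// Hypothetical example; assumes a form with an input named "email",
// hooked up in the HTML via <form onsubmit="return validateForm(this)">.
function validateForm(form) {
  if (form.email.value === "") {
    alert("Please enter your email address."); // nag the user
    return false; // cancel the submission so the page doesn't reload
  }
  return true; // let the form submit normally
}

That was the extent of its reputation, as far as I was concerned.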


Some proponents say that Javascript was always a useful language, but it took a long time for anyone to take it seriously.  I can actually buy this reasoning.  In our modern rush to make bigger and better web based applications, we often forget that traditional programming and website design are two separate skill sets.  Being good at one is no indicator that you'll be good at the other.  In fact, I'd say in many cases, being good at one means you're probably terrible at the other (I know no one likes stereotypes, but look at the interface for any open source project that isn't large enough to have dedicated graphic designers.  It probably doesn't look pretty).


Thus for most of its life, Javascript was relegated to a realm in which the people who could get the most out of it were the least likely to use it.  But as everyone began to agree that the future was on the web, more programming wizardry was required.  And with Javascript being the most ubiquitous programming language of the web, it was eventually put through its paces.


So, having thought it through, I can see and agree that Javascript is a legitimate and powerful language.  But I still can't help but feel like it is being misused, even in instances where it has been put to good use.  Prior to this post, I spent some time reading up on the language, including skimming Javascript: The Good Parts by Crockford.  It featured some easy-to-read, easy-to-grasp code examples which did much to prove the language's worth.  But when I look at JS libraries such as Dojo and jQuery (actually, no, jQuery isn't too bad), I see the opposite: ugly code which uses Javascript's weakly typed nature to screw with the syntax, as if each one wants to put its own unique stamp on the language and make it look like something entirely different in the end.  I wouldn't want to reinvent the wheel when a library could do the heavy lifting for me, and yet I wouldn't want to use a library that makes my code unreadable as Javascript.
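
For what it's worth, here's a rough sketch of the kind of plain Javascript that won me over; it's my own toy example in the spirit of the book rather than something taken from it, using a closure as a makeshift module with no library syntax involved:

// A closure acting as a makeshift module: 'count' is private, and only
// the two functions returned below can touch it.
var counter = (function () {
  var count = 0; // invisible to the outside world
  return {
    increment: function () { count += 1; return count; },
    reset: function () { count = 0; }
  };
}());

counter.increment(); // 1
counter.increment(); // 2
// counter.count is undefined, so nothing outside can tamper with the state.

Nothing fancy, but it still reads like Javascript, which is all I'm really asking of the libraries.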


And this is where I get to the root of the problem - real programmers have finally forced Javascript to flex its muscle, but I can't help but feel like most of the people using it aren't real programmers.  They're web guys who are more than happy to cut and paste someone else's code in order to get their frontpage to do flips and make AJAX calls.  It also seems to attract young and/or inexperienced programmers, who want to jump in with the hottest trend without having a solid understanding of the fundamentals.  And then there are the genuinely smart guys who are either misguided, have an ulterior motive, or aren't quite as bright as they appear.  Put all these camps together, and you wind up with something like node.js, which by appearances looks like a cult based on nothing but hype and circle-jerking.


I will learn me some Javascript.  But I'm going to do my damnedest to learn it the right way, and only use it when I have a real use for it.  I'm not a wizard programmer myself, but if I'm going to step into the future, I want to do it properly.