Archive for March 2011

Better Off Dead

Mar31

Title: Better Off Dead

Director/Stars: John Cusack (Actor), Demian Slade (Actor), Savage Steve Holland (Director)

Genre: Teen Comedy

Year: 1985

Watched: March 28, 2011

Summary: Absurdist, but classic.

 

For some strange reason I’ve been on an 80s kick lately. High School nostalgia or something. Not only did I make a playlist of synthoid classics, but I started combing Amazon marketplace for cheap (like $2) used DVDs. Somehow I missed seeing all of Better Off Dead in the 26 years since release (only bits and pieces on cable), surprising given my nearly comprehensive knowledge of 80s films, and that I’m a fan of John Cusack — excepting the execrable 2012.

This is one wacky film. While it must have seemed absurdist even in 1985, now, with the added retro touch and hammy 80s overacting, it’s really out there, bordering on Salvador Dali level surreal. But it is enjoyable. In a way it’s a parody of the then contemporary genre of 80s teen comedy, but it’s also a brother in arms. Nothing is taken too seriously and there are many priceless moments. Like one of my college buddies’ favorite lines, “NT, big difference” (referring to the textual delta between “testicles” and “tentacles”), Lane’s mom’s cooking crawling across the table, or the goofy skiing-pole lightsaber duel near the end. But with a modern perspective, there’s the added benefit of the nostalgic and silly 80s hair, clothing, music, and even half forgotten facts like: skiing was once cool! I remember it all too well: my first published video game was Ski Crazed!

When I saw Hot Tub Time Machine last year (another guilty pleasure) I was well aware of all the 80s movie spoof moments, but I hadn’t realized how much John Cusack was referencing Better Off Dead specifically. The plot is fairly meaningless, but as silly as the film is, at the core of most of the jokes are the real, embarrassing situations that plagued many teens — certainly in the 80s, and probably still today.

I was also not aware until I looked it up that Curtis Armstrong, better known as Booger, was already in his 30s when playing these silly teen characters. Or that he has played 122 roles! The guy’s been busy for decades.

If you want to see more 80s movie reviews, I also blogged yesterday on About Last Night.

Related posts:

  1. Book Review: Dead Beautiful
By: agavin
Comments (7)
Posted in: Movies
Tagged as: Arts, Better Off Dead, Better Off Dead (film), Fiction, Film, Film Review, Hot Tub Time Machine, John Cusack, Movie, Movie Review, review, Savage Steve Holland, Ski Crazed

About Last Night

Mar30

Title: About Last Night

Director/Stars: Rob Lowe (Actor), Demi Moore (Actor), Edward Zwick (Director)

Genre: Romantic Comedy (R!)

Year: 1986

Watched: March 27, 2011

Summary: Holds up brilliantly.

 

I’ve always loved this movie. Perhaps I’m a romantic at heart. Perhaps it’s the David Mamet dialogue, or maybe Demi’s just hot. I’ve probably seen it 5-6 times, but not in the last 20 years. Although I still have the laserdisk somewhere. In any case, it’s out on blu-ray now, so being on my 80s kick I figured I’d see how it held up.

Perfectly.

The crisp blu-ray transfer helped, taking out the sometimes distracting poor color and funny old film grain that were the legacy of old videotape transfers. But I also got to remember what I liked so much about the film. And not just Demi’s nipples. First of all, there is the fact that this is an R-rated romantic comedy. How many others even exist? It’s sexy, and the dialogue is raunchy and funny. Brilliant in fact. Particularly as delivered in James Belushi’s over the top performance as the sexist best friend, or by Elizabeth Perkins going toe to toe with him in bitchy counterpoint (made all the more amusing by having seen her in Weeds).

The most important thing about this film is the pitch perfect ebb and flow of the relationship between the two leads. It’s not the relationship everyone might have had, but it’s an accurate one. They feel like solidly real people. So in some ways, fairly unique among romantic comedies, there is truth here. Not every truth, but a specific one nonetheless. The film also has the audacity to cover nearly a year, and do it well, giving the rise and fall and then maybe rise again of this couple some actual weight and believability. You feel like they’ve changed and that there’s been a real passage of time. Far too many films in the genre feel like about three dates, where the writers, not the characters, are building the relationship.

I loved the 80s outfits too. The Reeboks, the sweaters and baggy shirts tied with belts, the high hip jeans. Sure they look silly, but… It’s also interesting to note the subtle culture changes that 25 years have wrought. The guy characters are allowed to be guys (and sexist) in ways that would be avoided today. I don’t really think men have changed, but Hollywood has.

Related posts:

  1. Book Review: Tropic of Night
  2. Machete – The best B-movie ever?
  3. Book and Movie Review: Harry Potter and the Deathly Hallows
  4. Truly Deeply Sick and Twisted
  5. Book and Movie Review: Let Me In
By: agavin
Comments (4)
Posted in: Movies
Tagged as: About Last Night, David Mamet, Demi Moore, Edward Zwick, Elizabeth Perkins, Fiction, Film, Film Review, James Belushi, Movie Review, reviews, Rob Lowe, Romantic comedy film, Weeds

Parlez Vu Modern?

Mar29

Restaurant: Vu [1, 2]

Location: 14160 Palawan Way, Marina del Rey, CA 90292. 310.439.3033

Date: March 25, 2011

Cuisine: Modern

Rating: Very creative, worth a trip, but needs a little tuning up


Vu is a new place in Marina del Rey. When I lived there in 1997-1998, MDR was a bit of a culinary wasteland. It hasn’t exactly had a renaissance, but it is improving, and Vu is certainly an example of that. This place is nicely situated along the Marina with good views — sort of oddly tucked into the ground floor of some apartment or office building — and it’s got very novel and even somewhat molecular food. There’s a lot of ambition here on the menu, and I give them an A for effort. But they need to tune it up a bit to reach the heights possible with this sort of cuisine. In LA, the current molecular champ (and there aren’t many contenders) is The Bazaar, and its tasting room Saam. This type of cuisine originated in Spain, and you can see some native examples HERE or HERE.


The menu offers both extensive small plates (front) and a few entrees and desserts (back). As I’ve said again and again, I love the small plate format.

The bread is homemade cornbread, with jalapeño butter. I approve. As a partial southerner, I love cornbread, and this was a good example of the type.

The drink menus. The wine was all California, which is hard for me, a dedicated European Wine lover.

2009 Brander Sauvignon Blanc. Despite being New World, I enjoyed it, much like a good Sancerre.

“‘Reconstructed’ Caprese Salad, balsamic-injected cherry tomatoes, basil-infused fresh mozzarella, red hawaiian sea salt, micro basil.” This was my wife’s favorite. A tad too much tomato for me (being a tomato hater). The texture was really neat though, almost like a meringue.

“Hamachi, bbq spice, collard green fluid gel, micro cilantro.” I was a bit disappointed with this dish. Maybe it needed a slightly better grade of fish, maybe more spice, but it didn’t have as much flavor as I would have expected.

“Grilled Cheese, toasted brioche, ‘midnight moon’, tomato jam, micro basil.” Overall nice. The sauce was a lot like a pomodoro sauce. The cheese was maybe a bit tangy for the combo somehow, but good.

“Peas & Carrots, carrot noodles, pea puree, carrot chips.” The textures here were really neat (particularly the chips). The overall flavor was pleasant but very subtle and muted.

“Chicken-Fried Watermelon, pickled ring.” The fry here was great fry, and since you can fry anything it more or less worked. It was a little odd or surprising to bite into fry and get watermelon, but as I said, it was great fry.

“Lobster Tail, citrus, popcorn jello, fritos.” My favorite dish hands down. The raw lobster had a really nice texture, like raw scallop, and the crunchy crisp of the fritos really went nicely. Overall it showed off both the subtle lobster flavor and had a bit of zing (from the citrus I assume).

“Lamb ‘Lollipops’ sweet tea poached, rosemary, roasted grape relish.” My second favorite. Very nice and meaty. Not the Middle Eastern lamb flavor I might have expected, but very tasty nonetheless.

“Buttermilk Panna Cotta, liquid nitrogen coke-a-cola, carbonated blackberries.” This was a little bit of a disappointment. I guess I wanted it to be sweeter. The Panna Cotta itself was a little sour, like a yogurt, and I expected the Coca-Cola topping to be VERY sweet to counter it (it was instead subtle). The blackberries I loved. They had some serious zing. Overall it was pleasant, but it could have been great.

The view is, for the most part, very nice.

The menu here at Vu is really interesting and innovative. It has a good format (with lots of small plates) and is well priced. But I think the kitchen needs to tune things up a bit. I’m not exactly sure where the issue is, but the dishes were often just nice instead of wow — and it seemed they could be wow. It might be ingredients in a couple of cases (like the Hamachi), or maybe it’s just a certain amount of zest or the ratios. The flavors were often a bit muted for my taste. It’s even possible that this is just a slightly flat execution by the kitchen (and the underlying recipes are good). This kind of cooking needs to really balance the flavors and have the whole thing jam through. Saam is a great example of this. At our tasting menu there a couple of weeks ago, nearly every dish jumped off the plate and straight into your hind-brain.

But I very much applaud the effort; far too many restaurants churn out the same boring stuff. So I’ll check back again and see how things have developed.

Check out a second review at Vu here.

Related posts:

  1. Finally, Modern Dim sum in Santa Monica
  2. Figs are in Season
  3. Food as Art – Takao
  4. Piccolo – A little Italian
  5. Takao Two
By: agavin
Comments (1)
Posted in: Food
Tagged as: Caprese, Cook, Cornbread, Dessert, Hamachi, Insalata Caprese, lamb, Lobster, Los Angeles, Marina del Rey California, Modern Cuisine, New World, Restaurant, Restaurant Review, Sancerre, side dishes, vegetarian

Crash Bandicoot – Teaching an Old Dog New Bits – part 3

Mar28

This is the twelfth of a now lengthy series of posts on the making of Crash Bandicoot. Click here for the PREVIOUS or for the BEGINNING of the whole mess.

The text below is another journal article I wrote on making Crash in 1999. This is the third part, the FIRST can be found here.


The Crash Bandicoot Trilogy: A Practical Example

The three Crash Bandicoot games represent a clear example of the process of technology and gameplay refinement on a single platform.  Crash Bandicoot was Naughty Dog’s first game on the Sony Playstation game console, and its first fully 3D game.  With Crash Bandicoot 2: Cortex Strikes Back and Crash Bandicoot: Warped, we were able to improve the technology, and offer a slicker, more detailed game experience in successively less development time.  With the exception of added support for the Analog Joystick, Dual Shock Controller, and Sony Pocketstation, the hardware platforms for the three titles are identical.

Timely and reasonably orderly development of a video game title is about risk management.  Given that you have a certain amount of time to develop the title, you can only allow for a certain quantity of gameplay and technology risks during the course of development.  One of the principal ways in which successive games improve is by the reuse of these risks.  Most solutions which worked for the earlier game will work again, if desired, in the new game.  In addition, many techniques can be gleaned from other games on the same machine that have been released during the elapsed time.

In the case of sequels such as the later Crash games there is even more reduction of risk.  Most gameplay risks, as well as significant art, code, and sound can be reused.  This allows the development team to concentrate on adding new features, while at the same time retaining all the good things about the old game.  The result is that sequels are empirically better games.

Crash Bandicoot   –   how do we do character action in 3D?

Development: September 1994 – September 1996

Staff: 9 people: 3 programmers, 4 artists, 1 designer, 1 support

Premise: Do for the ultra popular platform action game genre what Virtua Fighter had done for fighting games: bring it into 3D.  Design a very likeable broad market character and place him in a fun, fast paced action game.  Attempt to occupy the “official character” niche on the then empty Playstation market.  Remember that by the fall of 1994 no one had yet produced an effective 3D platform action game.

Gameplay risk: how do you design and control an action character in 3D such that the feel is as natural and intuitive as in 2D?

When we first asked ourselves, “what do you get if you put Sonic the Hedgehog (or any other character action game for that matter) in 3D,” the answer that came to mind was: “a game where you always see Sonic’s Ass.”  The entire question of how to make a platform game in 3D was the single largest design risk on the project.  We spent 9 months struggling with this before there was a single fun level.  However, by the time this happened we had formulated many of the basic concepts of the Crash gameplay.

We were trying to preserve all of the good elements of classic platform games.  To us this meant really good control, fast paced action, and progressively ramping challenges.  In order to maintain a very solid control feel we opted to keep the camera relatively stable, and to orient the control axis with respect to the camera.  Basically this means that Crash moves into the screen when you push up on the joypad.  This may seem obvious, but it was not at the time, and there are many 3D games which use different (and usually inferior) schemes.
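A minimal sketch of camera-relative control (invented names, not the actual Crash code): the pad input is rotated by the camera’s heading so that pushing “up” always moves the character into the screen.

```cpp
// Rotate raw joypad input into world space using the camera's yaw.
#include <cmath>

struct Vec3 { float x, y, z; };

// padX/padY are stick axes in [-1, 1]; cameraYaw is the camera heading in
// radians around the world up axis.
Vec3 PadToWorldMove(float padX, float padY, float cameraYaw) {
    float s = std::sin(cameraYaw);
    float c = std::cos(cameraYaw);
    Vec3 move;
    move.x = padX * c + padY * s;   // sideways component, rotated into world space
    move.y = 0.0f;                  // ground movement only
    move.z = padY * c - padX * s;   // "into the screen" component
    return move;
}
```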

Technical risk: how do you get the Playstation CPU and GPU to draw complex organic scenes with a high degree of texture and color complexity, good sorting, and a solid high resolution look?

It took quite a while, a few clever tricks, and not a little bit of assembly writing and rewriting of the polygon engines.  One of our major realizations was that on a CD based game system with a 33mhz processor, it is favorable to pre-compute many kinds of data in non real-time on the faster workstations, and then use a lean fast game engine to deliver high performance.

Technical risk: how do the artists build and maintain roughly 1 million polygon levels with per poly and per vertex texture and color assignment?

Constructing large detailed levels turned out to be one of the biggest challenges of the whole project.  We didn’t want to duplicate the huge amount of work that has gone into making the commercial 3D modeling packages, so we chose to integrate with one of them.  We tried Softimage at first, but a number of factors caused us to switch to Alias PowerAnimator.  When we began the project it was not possible to load and view a one million polygon level on a 200mhz R4400 Indigo II Extreme.  We spent several months creating a system and tools by which smaller chunks of the level could be hierarchically assembled into a larger whole.

In addition, the commercial packages were not aware that anyone would desire per polygon and per vertex control over texture, color, and shading information.  They used a projective texture model preferred by the film and effects industry.  In order to make the best use of the limited amount of memory on the Playstation we knew we would need to have very detailed control.  So we created a suite of custom tools to aid in the assignment of surface details to PowerAnimator models.  Many of these features have since been folded into the commercial programs, but at the time we were among the first to make use of this style of model construction.

Technical risk: how do you get a 200mhz R4400 Indigo II to process a 1 million polygon level?

For the first time in our experience, it became necessary to put some real thought into the design of the offline data processing pipeline.  When we first wrote the level processing tool it took 20 hours to run a small test case.  A crisis ensued and we were forced to both seriously optimize the performance of the tool and multithread it so that the process could be distributed across a number of workstations.

Conventional wisdom says that game tools are child’s play.  Historically speaking, this is a fair judgment — 2D games almost never involve either sophisticated preprocessing or huge data sets.  But now that game consoles house dedicated polygon rendering hardware, the kid gloves are off.

In Crash Bandicoot players explore levels composed of over a million polygons.  Quick and dirty techniques that work for smaller data sets (e.g., repeated linear searches instead of binary searches or hash table lookups) no longer suffice.  Data structures now matter — choosing one that doesn’t scale well as the problem size increases leads to level processing tasks that take hours instead of seconds.
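To illustrate the scaling point, here is a hedged sketch (not the actual Crash tool code; the texture lookup is an invented example) contrasting a repeated linear search with building a hash index once and reusing it.

```cpp
// Illustrative only; the names below are invented, not from the Crash tools.
#include <string>
#include <unordered_map>
#include <vector>

struct Texture { std::string name; /* pixel data, etc. */ };

// O(n) per query: fine for a few hundred assets, painful for hundreds of thousands.
const Texture* FindLinear(const std::vector<Texture>& all, const std::string& name) {
    for (const Texture& t : all)
        if (t.name == name) return &t;
    return nullptr;
}

// O(1) average per query: build the index once, then reuse it for every lookup.
std::unordered_map<std::string, const Texture*>
BuildIndex(const std::vector<Texture>& all) {
    std::unordered_map<std::string, const Texture*> index;
    index.reserve(all.size());
    for (const Texture& t : all) index[t.name] = &t;
    return index;
}
```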

The problems have gotten correspondingly harder, too.  Building an optimal BSP tree, finding ideal polygon strips, determining the best way to pack data into fixed-size pages for CD streaming — these are all tough problems by any metric, academic or practical.
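The sketch below shows the general flavor of the page-packing problem just described, using simple first-fit-decreasing; the real layout tool solved a far more constrained version, and all the names here are invented.

```cpp
// First-fit-decreasing sketch: pack resource sizes into fixed-size pages.
#include <algorithm>
#include <cstddef>
#include <vector>

struct Page { std::size_t used = 0; std::vector<int> resources; };

std::vector<Page> PackPages(const std::vector<std::size_t>& sizes, std::size_t pageSize) {
    // Sort resource indices by descending size.
    std::vector<int> order(sizes.size());
    for (int i = 0; i < static_cast<int>(order.size()); ++i) order[i] = i;
    std::sort(order.begin(), order.end(),
              [&](int a, int b) { return sizes[a] > sizes[b]; });

    // Drop each resource into the first page with room, opening new pages as needed.
    std::vector<Page> pages;
    for (int id : order) {
        Page* target = nullptr;
        for (Page& p : pages)
            if (p.used + sizes[id] <= pageSize) { target = &p; break; }
        if (!target) { pages.emplace_back(); target = &pages.back(); }
        target->used += sizes[id];
        target->resources.push_back(id);
    }
    return pages;
}
```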

To make matters worse, game tools undergo constant revision as the run-time engine evolves towards the bleeding edge of available technology.  Unlike many jobs, where programmers write functional units according to a rigid a priori specification, games begin with a vague “what-if” technical spec — one that inevitably changes as the team learns how to best exploit the target machine for graphics and gameplay.

The Crash tools became a test bed for developing techniques for large database management, parallel execution, data flexibility, and complicated compression and bin packing techniques.

Art / Technical risk: how do you make low poly 3D characters that don’t look like the “Money for Nothing” video?

From the beginning, the Crash art design was very cartoon in style.  We wanted to back up our organic stylized environments with highly animated cartoon characters that looked 3D, but not polygonal.  By using a single skinned polygonal mesh model similar to the kind used in cutting edge special effects shots (except with a lot less polygons),  we were able to create a three dimensional cartoon look.  Unlike the traditional “chain of sausages” style of modeling, the single skin allows interesting “squash and stretch” style animation like that in traditional cartoons.

By very careful hand modeling, and judicious use of both textured and shaded polygons, we were able to keep these models within reasonable polygon limits.  In addition, it was our belief that because Crash was the most important thing in the game, he deserved a substantial percentage of the game’s resources.  Our animation system allows Crash to have unique facial expressions for each animation, helping to convey his personality.

Technical risk: how do you fit a million polygons, tons of textures, thousands of frames of animation, and lots of creatures into a couple megs of memory?

Perhaps the single largest technical risk of the entire project was the memory issue.  Although there was a plan from the beginning, this issue was not tackled until February of 1996.  At this point we had over 20 levels in various stages of completion, all of which consumed between 2 and 5 megabytes of memory.  They had to fit into about 1.2 megabytes of active area.

At the beginning of the project we had decided that the CD was the system resource least likely to be fully utilized, and that system memory (of various sorts) was going to be one of the greatest constraints.  We planned to trade CD bandwidth and space for increased level size.

The Crash series employs an extremely complicated virtual memory scheme which dynamically swaps into memory any kind of game component: geometry, animation, texture, code, sound, collision data, camera data, etc.  A workstation based tool called NPT implements an expert system for laying out the disk.  This tool belongs to the class of formal Artificial Intelligence programs.  Its job is to figure out how the 500 to 1000 resources that make up a Crash level can be arranged so as to never have more than 1.2 megabytes needed in memory at any time.  A multithreaded virtual memory implementation follows the instructions produced by the tool in order to achieve this effect at run time.  Together they manage and optimize the essential resources of main, texture, and sound RAM based on a larger CD based database.
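The fragment below only illustrates the core constraint such a layout must satisfy (keep the resident set under the active budget at every point in the level); the structures and names are invented, and the real NPT expert system solved the much harder problem of actually choosing a layout that meets it.

```cpp
// Invented structures for illustration: check that the resources needed at
// any point along the level never exceed the active memory budget.
#include <cstddef>
#include <vector>

struct Resource {
    std::size_t bytes;
    int firstNeededAt;   // first point along the level where it must be resident
    int lastNeededAt;    // last point where it must be resident
};

bool FitsBudget(const std::vector<Resource>& resources,
                int levelLength, std::size_t budgetBytes) {
    std::vector<std::size_t> resident(levelLength, 0);
    for (const Resource& r : resources)
        for (int p = r.firstNeededAt; p <= r.lastNeededAt && p < levelLength; ++p)
            if (p >= 0) resident[p] += r.bytes;
    for (std::size_t bytes : resident)
        if (bytes > budgetBytes) return false;   // layout would blow the ~1.2 MB budget
    return true;
}
```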

Technical/Design risk: what to do with the camera?

With the 32 bit generation of games, cameras have become a first class character in any 3D game.  However, we did not realize this until considerably into the project.  Crash represents our first tentative stab at how to do an aesthetic job of controlling the camera without detracting from gameplay.  Although it was rewritten perhaps five times during the project, the final camera is fairly straightforward from the perspective of the user.  None of the crop of 1995 and 1996 3D action games played very well until Mario 64 and Crash.  These two games, while very different, were released within two months of each other, and we were essentially finished with Crash when we first saw Mario.  Earlier games had tended to induce motion sickness and made it difficult for players to quickly judge the layout of the scene.  In order to enhance the tight, high impact feel of Crash’s gameplay, we were fairly conservative with the camera.  As a result Crash retains the quick action feel of the traditional 2D platform game more faithfully than other 3D games.

Technical risk: how do you make a character collide in a reasonable fashion with an arbitrary 3D world… at 30 frames a second?

Another of the game’s more difficult challenges was in the area of collision detection.  From the beginning we believed this would be difficult, and indeed it was.  For action games, collision is a critical part of the overall feel of the game.  Since the player is looking down on a character in the 3rd person, he is intimately aware when the collision does not react reasonably.

Crash can often be within a meter or two of several hundred polygons.  This means that the game has to store and process a great deal of data in order to calculate the collision reactions.  We had to comb through the computer science literature for innovative new ways of compressing and storing this database.  One of our programmers spent better than six months on the collision detection part of the game, writing and rewriting the entire system half a dozen times.  Finally, with some very clever ideas, and a lot of hacks, it ended up working reasonably well.
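The text doesn’t spell out the scheme, so the fragment below is only a generic broad-phase sketch (bucketing collision triangles into a coarse grid so only nearby polygons get tested each frame), not the compressed structure Crash actually used.

```cpp
// Invented illustration of a broad-phase collision grid.
#include <cmath>
#include <map>
#include <utility>
#include <vector>

struct CollisionGrid {
    float cellSize = 2.0f;                                   // roughly "a meter or two"
    std::map<std::pair<int, int>, std::vector<int>> cells;   // cell -> triangle indices

    std::pair<int, int> CellOf(float x, float z) const {
        return { static_cast<int>(std::floor(x / cellSize)),
                 static_cast<int>(std::floor(z / cellSize)) };
    }

    // Register triangle 'triIndex' by its (pre-computed) center position.
    void Insert(int triIndex, float x, float z) {
        cells[CellOf(x, z)].push_back(triIndex);
    }

    // Triangles to test against the character standing at (x, z).
    const std::vector<int>* Near(float x, float z) const {
        auto it = cells.find(CellOf(x, z));
        return it == cells.end() ? nullptr : &it->second;
    }
};
```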

Technical risk: how do you program, coordinate, and maintain the code for several hundred different game objects?

Object control code, which the gaming world euphemistically calls AI, typically runs only a couple of times per frame. For this kind of code, speed of implementation, flexibility, and ease of later modification are the most important requirements.  This is because games are all about gameplay, and good gameplay only comes from constant experimentation with and extensive reworking of the code that controls the game’s objects.

The constructs and abstractions of standard programming languages are not well suited to object authoring, particularly when it comes to flow of control and state.  For Crash Bandicoot we implemented GOOL (Game Oriented Object LISP), a compiled language designed specifically for object control code that addresses the limitations of traditional languages.

Having a custom language whose primitives and constructs both lend themselves to the general task (object programming), and are customizable to the specific task (a particular object), makes it much easier to write clean descriptive code very quickly.  GOOL makes it possible to prototype a new creature or object in as little as 10 minutes.  New things can be tried and quickly elaborated or discarded. If the object doesn’t work out, it can be pulled from the game in seconds without leaving any hard to find and wasteful traces behind in the source.  In addition, since GOOL is a compiled language produced by an advanced register coloring compiler with reductions, flow analysis, and simple continuations, it is at least as efficient as C, more so in many cases because of its more specific knowledge of the task at hand.  The use of a custom compiler allowed us to escape many of the classic problems of C.
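GOOL itself was a compiled LISP dialect, so the C++ fragment below is only a loose illustration of the kind of per-object state and flow-of-control code that such a language makes cheap to write and throw away; the object and its behavior are invented.

```cpp
// Invented example object; GOOL expressed this sort of thing far more tersely.
enum class TurtleState { Patrol, Flipped, Recovering };

struct Turtle {
    TurtleState state = TurtleState::Patrol;
    float timer = 0.0f;

    void Update(float dt, bool wasJumpedOn) {
        switch (state) {
        case TurtleState::Patrol:
            if (wasJumpedOn) { state = TurtleState::Flipped; timer = 3.0f; }
            break;
        case TurtleState::Flipped:          // lies on its back for a few seconds
            timer -= dt;
            if (timer <= 0.0f) state = TurtleState::Recovering;
            break;
        case TurtleState::Recovering:       // flips back over and resumes patrol
            state = TurtleState::Patrol;
            break;
        }
    }
};
```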

Crash Bandicoot 2: Cortex Strikes Back  –   Bigger and Badder!

Development: October 1996 – November 1997

Staff: 14 people: 4 programmers, 6 artists, 1 designer, 3 support

Premise: Make a sequel to the best selling Crash Bandicoot that delivered on all the good elements of the first game, as well as correcting many of our mistakes.  Increase the technical muscle of the game and improve upon the gameplay, all without looking “been there, done that…” and do it all in one year.

For Crash 2 we rewrote approximately 80% of the game engine and tool code.  We did so module by module in order to allow continuous development of game levels.  Having learned during Crash 1 about what we really needed out of each module we proceeded to rewrite them rapidly so that they offered greater speed and flexibility.

Technical risk: A fancy new tools pipeline designed to deal with a constantly changing game engine?

The workstation based tools pipeline was a crucial part of Crash 1.  However, at the time of its original conception, it was not clear that this was going to be the case.  The new Crash 2 tools pipe was built around a consistent database structure designed to allow the evolution of level databases, automatic I/O for complex data types, data browsing and searching, and a number of other features.  The pipe was modularized and various built-in restrictions were removed.  The new pipe was able to support the easy addition of arbitrary new types of data and information to various objects without outdating old information.

At the beginning of the first game we could never have designed a tool program clean enough to handle the changes and additions of Crash 2 and Warped.  Being aware of what was needed at the start of the rewrite allowed us to design a general infrastructure that could support all of the features we had in mind.  This infrastructure was then flexible enough to support the new features added to both sequels.

Technical/process risk: The process of making and refining levels took too long during the first game.  Can we improve it?

The most significant bottleneck in making Crash 1 was the overall time it took to build and tune a level.  So for Crash 2 we took a serious look at this process and attempted to improve it.

For the artists, the task of surfacing polygons (applying texture and color) was very time consuming.  Therefore, we made improvements to our surfacing tools.

For both the artists and designers, the specification of different resources in the level was exceedingly tedious.  So we added a number of modules to the tools pipeline designed to automatically balance and distribute many of these resources, as well as to auto calculate the active ranges of objects and other resources that had to be controlled manually in the first game.  In addition, we moved the specification of camera, camera info, game objects, and game object info into new text based configuration files.  These files allowed programmers and designers to edit and add information more easily, and they also allowed the programmers to add new kinds of information quickly and easily.

The result of this process was not really that levels took any less time to make, but that the complexity allowed was several times that of the first game.  Crash 2 levels are about twice as large, have integrated bonus levels, multiple branches, “hard paths,” and three or four times as many creatures, each with an order of magnitude more settable parameters.  The overall turn around time for changing tunable level information was brought down significantly.

Technical/Design risk: can we make a better more flexible camera?

The camera was one of the things in Crash 1 with which we were least satisfied.  So in order to open up the game and make it feel more lifelike, we allowed the camera to look around much more, and supported a much wider set of branching and transition cameras.  In addition, arbitrary parameterized information was added to the camera system so that at any location the camera had more than 100 possible settable options.

If the two games are compared side by side, it can be seen that the overall layouts of Crash 2 levels are much larger and more complicated.  The camera is more natural and fluid, and there are numerous dynamic camera transitions and effects which were not present in the first game.  Even though the Crash 2 camera was written entirely from scratch, the lessons learned during the course of Crash 1 allowed it to be more sophisticated and aggressive, and it executed faster than its predecessor.

Optimization risk: can we put more on screen?

Crash 1 was one of the fastest games of its generation, delivering high detail images at 30 frames per second.  Nevertheless, for Crash 2 we wanted to put twice as much on screen, yet still maintain that frame-rate.  In order to achieve this goal we had one programmer doing nothing but re-coding areas of the engine into better assembly for the entire length of the project.  Dramatically increasing performance does not just mean moving instructions around; it is a complex and involved process.  First we study the performance of all relevant areas of the hardware in a scientific and systematic fashion.  Profiles are made of cache latencies, coprocessor parallel processing constraints, etc.  Game data structures are then carefully rearranged to aid the engine in loading and processing them in the most efficient way.  Complicated compression and caching schemes are explored to both reduce storage size (often linked to performance due to bus bandwidth) and to speed up the code.
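One concrete example of “rearranging game data structures to aid the engine” is switching from an array-of-structures to a structure-of-arrays layout so hot loops stream only the fields they touch; the sketch below is illustrative, not the actual Crash data format.

```cpp
// Array-of-structures vs structure-of-arrays, for cache-friendly processing.
#include <vector>

// AoS: transforming positions also drags color/uv data through the cache.
struct VertexAoS { float x, y, z; unsigned char r, g, b, a; float u, v; };

// SoA: the transform loop reads positions contiguously and nothing else.
struct VerticesSoA {
    std::vector<float> x, y, z;
    std::vector<unsigned char> r, g, b, a;
    std::vector<float> u, v;
};

void TranslateX(VerticesSoA& v, float dx) {
    for (float& xi : v.x) xi += dx;   // touches only the position stream
}
```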

Simultaneously we modularized the game engine to add more flexibility and features.  Crash 2 has more effects, such as Z-buffer-like water effects, weather, reflections, particles, talking hologram heads, etc.  Many annoying limitations of the Crash 1 drawing pipeline were removed, and most importantly, the overall speed was increased by more than two-fold.

In order to further improve performance and allow more simultaneous creatures on screen, we re-coded the GOOL interpreter into assembly, and also modified the compiler to produce native MIPS assembly for even better performance.

Technical risk: if we can put more on screen, can we fit it in memory?

We firmly believe that all three Crash games make use of the CD in a more aggressive fashion than most Playstation games.  So in order to fit the even larger Crash 2 levels into memory (often up to 12 megabytes a level) we had to increase the efficiency of the virtual memory scheme even more.  To do so we rewrote the AI that lays out the CD, employing several new algorithms.  Since different levels need different solutions we created a system by which the program could automatically try different approaches with different parameters, and then pick the best one.

In addition, since Crash 2 has about 8 times the animation of the first game, we needed to really reduce the size of the data without sacrificing the quality of the animation.  After numerous rewrites the animation was stored as a special bitstream compressed in all 4 dimensions.
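As a toy illustration of the general idea (quantize animation channels and store small frame-to-frame deltas), not the actual Crash bitstream format:

```cpp
// Quantize a float channel to 16-bit integers, then store per-frame deltas,
// which are small and compress far better than raw floats.
#include <cstdint>
#include <vector>

std::vector<int16_t> DeltaQuantize(const std::vector<float>& channel, float scale) {
    std::vector<int16_t> out;
    out.reserve(channel.size());
    int16_t prev = 0;
    for (float v : channel) {
        int16_t q = static_cast<int16_t>(v * scale);
        out.push_back(static_cast<int16_t>(q - prev));
        prev = q;
    }
    return out;
}
```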

Design risk: can we deliver a gameplay experience that is more than just “additional levels of Crash?”

We believe that game sequels are more than an opportunity to just go “back to the bank.”  For both of the Crash sequels we tried to give the player a new game that, while very much in the same style, was empirically a bigger, better game.  So with the increased capacity of the new Crash 2 engine we attempted to build larger, more interesting levels with a greater variety of gameplay, and a more even and carefully constructed level of difficulty progression.  Crash 2 has about twice as many creatures as Crash 1, and their behaviors are significantly more sophisticated.  For example, instead of just putting the original “turtle” back into the game, we added two new and improved turtles, which had all the attributes of the Crash 1 turtle, but also had some additional differences and features.  In this manner we tried to build on the work from the first game.

Crash himself remains the best example.  In the second game Crash retains all of the moves from the first, but gains a number of interesting additional moves: crawling, ducking, sliding, belly flopping, plus dozens of custom coded animated death sequences.  Additionally, Crash has a number of new control specs: ice, surfboard, jet-pack, baby bear riding, underground digging, and hanging.  These mechanics provide entirely new game machines to help increase the variety and fun factor of the game.  It would be very difficult to include all of these in a first generation game because so much time is spent refining the basic mechanic.

Technically, these additions and enhancements were aided by the new more flexible information specification of the new tools pipeline, and by additions to the GOOL programming language based on lessons learned from the first game.

Crash Bandicoot: Warped!  –   Every trick in the book!

Development: January 1998 – November 1998

Staff: 15 people: 3 programmers, 7 artists, 3 designers, 2 support

Premise: With only 9 months in which to finish by Christmas, we gave ourselves the challenge of making a third Crash game which would be even cooler and more fun than the previous one.  We chose a new time travel theme and wanted to differentiate the graphic look and really increase the amount and variety of gameplay.  This included power-ups, better bosses, lots of new control mechanics, an open look, and multiple playable characters.

Technical/Process risk: the tight deadline and a smaller programming staff required us to explore options for even greater efficiency.

The Crash Warped production schedule required that we complete a level every week.  This was nearly twice the typical rate for Crash levels.  In addition, many of the new levels for Warped required new engines or sub-engines designed to give them a more free-roaming 3D style.  In order to facilitate this process we wrote an interactive listener which allowed GOOL based game objects to be dynamically examined, debugged, and tuned.  We were then able to set the parameters and features of objects in real-time, greatly improving our ability to tune and debug levels.  Various other visual debugging and diagnostic techniques were introduced as well.

Knowledge from the previous game allowed us to further pipeline various processes.  The Crash series is heavily localized for different territories.  The European version supports five languages, text and speech, including lip sync.  In addition, it was entirely re-timed, and the animation was resampled for 25hz.  The Japanese version has Pocketstation support, a complete language translation, and a number of additional country specific features.  We were able to build in the features needed to make this happen as we wrote the US version of the game.  The GOOL language was expanded to allow near automatic conversion of character control timing for PAL.
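A minimal sketch of the NTSC-to-PAL retiming arithmetic (30 frames at NTSC cover the same wall-clock time as 25 at PAL); this is an illustration, not the GOOL conversion code itself.

```cpp
// Convert a duration authored in 30 Hz (NTSC) frames into 25 Hz (PAL) frames.
#include <cmath>

int NtscFramesToPal(int ntscFrames) {
    // 30 NTSC frames = 1 second = 25 PAL frames.
    return static_cast<int>(std::lround(ntscFrames * 25.0 / 30.0));
}
```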

Technical/Art risk: could the trademark look of the Crash series be opened up to offer long distance views and to deliver levels with free-roaming style gameplay?

In order to further differentiate the third Crash game, we modified the engine to support long distance views and Level of Detail (LOD) features.  Crash Warped has a much more open look than the previous games, with views up to ten times as far.  The background polygon resource manager needed some serious reworking in order to handle this kind of increased polygon load, as did the AI memory manager.  We developed the new LOD system to help manage these distance views.  These kinds of system complexities would not have been feasible in a first generation game, since when we started Crash 1, the concept of LOD in games was almost completely undeveloped, and just getting a general engine working was enough of a technical hurdle.
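A sketch of distance-based LOD selection, with invented structures rather than the actual Crash engine’s: far chunks of the background are drawn with coarser meshes so the long views stay affordable.

```cpp
// Pick a mesh based on distance to the camera.
#include <vector>

struct LodEntry { float maxDistance; int meshId; };   // sorted by ascending maxDistance

// Assumes 'lods' is non-empty and sorted; beyond the last threshold use the coarsest mesh.
int SelectLod(const std::vector<LodEntry>& lods, float distanceToCamera) {
    for (const LodEntry& lod : lods)
        if (distanceToCamera <= lod.maxDistance) return lod.meshId;
    return lods.back().meshId;
}
```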

Similarly, the stability of the main engine allowed us to concentrate more programmer time on creating and polishing the new sub-engines:  jet-ski, motorcycle, and biplane.

Gameplay risk: could we make the gameplay in the new Crash significantly different from the previous ones and yet maintain the good elements of the first two games?

The new free-roaming style levels presented a great gameplay challenge.  We felt it necessary to maintain the fast-paced, forward driven Crash style of gameplay even in this new context.  The jet-ski in particular represented a new kind of level that was not present in the first two games.  It is part race game, part vehicle game, and part regular Crash level.  By combining familiar elements like the boxes and creatures with the new mechanics, we could add to the gameplay variety without sacrificing the consistency of the game.

In addition to jet-ski, biplane, and motorcycle levels, we also added a number of other new mechanics (swimming, bazooka, baby T-rex, etc.) and brought back most of Crash 2’s extensive control set.  We tried to give each level one or more special hooks by adding gameplay and effect features.  Warped has nearly twice as many different creatures and gameplay modes as Crash 2.  The third game clocked in at 122,000 lines of GOOL object control code, as compared to 68,000 for the second game and 49,000 for the first!  The stability of the basic system and the proven technical structure allowed the programmers to concentrate on gameplay features, packing more fun into the game.  This was only possible because on fixed hardware like the Playstation, we were fairly confident that the Warped engine was reasonably optimal for the Crash style of game.  Had we been making the game for a moving target such as the PC, we would have been forced to spend significant time updating to match the new target, and would not have been able to focus on gameplay.

Furthermore, we had time, even with such a tight schedule, to add more game longevity features.  The Japanese version of Warped has Pocketstation support.  We improved the quality of the boss characters significantly, improved the tuning of the game, added power-ups that can be taken back to previously played levels, and added a cool new time trial mode.  Crash games have always had two modes of play for each level: completion (represented by crystals) and box completion (represented by gems).  In Warped we added the time trial mode (represented by relics).  This innovative new gameplay mode allows players to compete against themselves, each other, and preset goals in the area of timed level completion.  Because of this each level has much more replay value and it takes more than twice as long to complete Warped with 100% as it does Crash 2.

Technical risk: more more more!

As usual, we felt the need to add lots more to the new game.  Since most of Crash 2’s animations were still appropriate, we concentrated on adding new ones.  Warped has a unique animated death for nearly every way in which Crash can lose a life.  It has several times the animation of the second game.  In addition, we added new effects like the arbitrary water surface, and large scale water effects.  Every character, including Crash, got a fancy new shadow that mirrors the animated shape of the character.

All these additions forced us to squeeze even harder to get the levels into memory.  Additional code overlays, redundant code mergers, and the sacrifice of thirteen polka dotted goats to the level compression AI were necessary.

Conclusions

In conclusion, the consistency of the console hardware platform over its lifetime gives developers an opportunity to successively improve their code, taking advantage of techniques and knowledge learned by themselves and others.  With each additional game the amount of basic infrastructure programming that must be done is reduced, and so more energy can be put into other pursuits, such as graphical and gameplay refinements.

If you liked this post, follow me at:

My novels: The Darkening Dream and Untimed
or the
video game post depot
or win Crash & Jak giveaways!

Latest hot post: War Stories: Crash Bandicoot

Related posts:

  1. Crash Bandicoot – Teaching an Old Dog New Bits – part 2
  2. Crash Bandicoot – Teaching an Old Dog New Bits – part 1
  3. Crash Bandicoot – An Outsider’s Perspective (part 8)
  4. Making Crash Bandicoot – part 1
  5. Crash Bandicoot as a Startup (part 7)
By: agavin
Comments (181)
Posted in: Games, Technology
Tagged as: Andy Gavin, Central processing unit, Console Games, Crash Bandicoot, Crash Bandicoot 2: Cortex Strikes Back, Crash Bandicoot 3: Warped, game, Jak and Daxter, Naughty Dog, Platform game, Playstation, PowerAnimator, pt_crash_history, Video game console, Video Games

Crash Bandicoot – Teaching an Old Dog New Bits – part 2

Mar27

This is the eleventh of a now lengthy series of posts on the making of Crash Bandicoot. Click here for the PREVIOUS or for the BEGINNING of the whole mess.

The text below is another journal article I wrote on making Crash in 1999. This is the second part, the FIRST can be found here.

 

And finally to the point!

Both the rapid lifecycle of a video game console and the consistency of the hardware promote video game development strategies that are often very different from the strategies used to make PC video games.   A side-effect of these strategies and the console development environment is that video games released later in the life of a console tend to be incrementally more impressive than earlier titles, even though the hardware hasn’t changed.  Theoretically, since the hardware doesn’t change, first generation software could be just as impressive as later generation titles, but in reality this is seldom the case.  It may seem obvious that a developer should try to make a first generation title as impressive as a last generation title, but actually this strategy has been the downfall of many talented developers.  There are many good and valid reasons why software improves over time, and understanding and strategizing about these reasons can greatly improve a developer’s chances of being successful in the marketplace.

Difficulties of Console Video Game Development

There are many difficulties that are encountered when developing a console video game, but the following is a list of several major issues:

  • Learning curve
  • Hardware availability and reliability
  • Bottlenecks
  • Operating System / Libraries
  • Development tools
  • In-house tools
  • Reuse of code
  • Optimization

Learning curve

The learning curve may be the most obvious of all difficulties, and is often one of the most disruptive elements of a video game’s development schedule.  In the past, video games were often developed by small groups of one or more people, had small budgets, ran in a small amount of memory, and had short schedules.  The graphics were almost always 2D, and the mathematics of the game were rarely more than simple algebra.  Today, video games have become much more complicated, and often require extremely sophisticated algorithms and mathematics.  Also, the sheer size of the data within a game has made both the run-time code and the tool pipeline require extremely sophisticated solutions for data management issues.  Furthermore, 3D mathematics and rendering can be very CPU intensive, so new tricks and techniques are constantly being created.   Also, the developer will often have to use complex commercial tools, such as 3D modeling packages, to generate the game’s graphics and data.  Add to this the fact that operating systems, APIs, and hardware components are continually changing, and it should be obvious that just staying current with the latest technology requires an incredible amount of time, and can have a big impact on the schedule of a game.

The console video game developer has the additional burden that, unlike the PC, where the hardware evolves more on a component or API level, new console hardware is normally drastically different and more powerful than the preceding hardware.  The console developer has to learn many new things, such as new CPUs, new operating systems, new libraries, new graphics devices, new audio devices, new peripherals, new storage devices, new DMA techniques, new co-processors, as well as various other hardware components.  Also, the console developer usually has to learn a new development environment, including a new C compiler, a new assembler, a new debugger, and a slew of new support tools.  To complicate matters, new consoles normally have many bugs in such things as the hardware, the operating system, the software libraries, and in the various components of the development environment.

The learning curve of the console hardware is logarithmic: very steep at first, then flattening out dramatically by the end of the console life-span.  This initial steep learning curve is a big reason why first generation software usually isn’t as good as later software.

Hardware availability and reliability

Hardware isn’t very useful without software, and software takes a long time to develop, so it is important for hardware developers to try to encourage software developers to begin software development well in advance of the launch date of the hardware.  It is not uncommon for developers to begin working on a title even before the hardware development kits are available.  To do this, developers will start working on things that don’t depend on the hardware, such as some common tools, and they may also resort to emulating the hardware in software.  Obviously, this technique is not likely to produce software that maximizes the performance of the hardware, but it is done nevertheless because of the time constraints of finishing a product as close as possible to the launch of the console into the market.  The finished first generation game’s performance is not going to be as good as later generations of games, but this compromise is deemed acceptable in order to achieve the desired schedule.

When the hardware does become available for developers, it is usually only available in limited quantity, is normally very expensive, and eventually ends up being replaced by cheaper and more reliable versions of the hardware at some later time.  Early revisions of the hardware may not be fully functional, or may have components that run at a reduced speed, so they are difficult to fully assess, and are quite scarce since the hardware developer doesn’t want to make very many of them.  Even when more dependable hardware development kits become available, they are usually difficult to get, since production of these kits is slow and expensive, so quantities are low, and software developers are in competition to get them.

The development kits, especially the initial hardware, tend to have bugs that have to be worked around or avoided.  Also, the hardware tends to have contact connection problems so that it is susceptible to vibrations, oxidation, and overheating.  These problems generally improve with new revisions of the development hardware.

All of these reasons will contribute to both a significant initial learning curve, and a physical bottleneck of having an insufficient number of development kits.   This will have a negative impact on a game’s schedule, and the quality of first generation software often suffers as a consequence.

Bottlenecks

An extremely important aspect to console game development is the analysis of the console’s bottlenecks, strengths, weaknesses, and overall performance.  This is critical for developing high performance games, since each component of the console has a fixed theoretical maximum performance, and undershooting that performance may cause your game to appear under-powered, while overshooting may cause you to have to do major reworking of the game’s programming and/or design.  Also, overshooting performance may cause the game to run at an undesirable frame rate, which could compromise the look and feel of the game.

The clever developer will try to design the game to exploit the strengths of the machine, and circumvent the weaknesses.  To do this, the developer must be as familiar as possible with the limitations of the machine.  First, the developer will look at the schematic of the hardware to find out the documented sizes, speeds, connections, caches, and transfer rates of the hardware.  Next, the developer should do hands-on analysis of the machine to look for common weaknesses, such as:  slow CPUs, limited main memory, limited video memory, limited sound memory, slow BUS speeds, slow RAM access, small data caches, small instruction caches, small texture caches, slow storage devices, slow 3D math support, slow interrupt handling, slow game controller reading, slow system routines, and slow polygon rendering speeds.  Some of these things are easy to analyze, such as the size of video memory, but some of these things are much trickier, such as polygon rendering speeds, because the speed will vary based on many factors, such as source size, destination size, texture bit depth, caching, translucency, and z-buffering, to name just a few.  The developer will need to write several pieces of test code to study the performance of the various hardware components, and should not necessarily trust the statistics found in the documentation, since these are often wrong or misleading.
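A modern-flavored sketch of that kind of test code (on the actual console one would read a hardware counter rather than std::chrono, and the operation being measured here is just a stand-in):

```cpp
// Time an operation over many iterations and report a rate.
#include <chrono>

template <typename Fn>
double MeasureRate(Fn&& op, int iterations) {
    auto start = std::chrono::steady_clock::now();
    for (int i = 0; i < iterations; ++i) op();
    std::chrono::duration<double> elapsed = std::chrono::steady_clock::now() - start;
    return iterations / elapsed.count();   // operations per second
}

// e.g. double rate = MeasureRate([] { /* draw a batch of test polygons */ }, 1000);
```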

A developer should use a profiler to analyze where speed losses are occurring in the run-time code.  Most programmers will spend time optimizing code because the programmer suspects that code is slow, but doesn’t have any empirical proof.  This lack of empirical data means that the programmer will invariably waste a lot of time optimizing things that don’t really need to be optimized, and will not optimize things that would have greatly benefited from optimization. Unfortunately, a decent profiler is almost never included in the development software, so it is usually up to the individual developer to write his own profiling software.
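A minimal sketch of a homemade scoped profiler of the sort described, with invented names: totals accumulated per label show where the frame actually spends its time.

```cpp
// Accumulate elapsed time per label so optimization effort can follow the data.
#include <chrono>
#include <map>
#include <string>

struct Profiler {
    static std::map<std::string, double>& Totals() {
        static std::map<std::string, double> totals;   // label -> milliseconds
        return totals;
    }
};

struct ScopedTimer {
    std::string label;
    std::chrono::steady_clock::time_point start;
    explicit ScopedTimer(std::string l)
        : label(std::move(l)), start(std::chrono::steady_clock::now()) {}
    ~ScopedTimer() {
        std::chrono::duration<double, std::milli> ms =
            std::chrono::steady_clock::now() - start;
        Profiler::Totals()[label] += ms.count();
    }
};

// Usage: { ScopedTimer t("collision"); /* run collision for this frame */ }
```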

The testing of performance is an extremely important tool to use in order to maximize performance.  Often the reason why software improves between generations is that the developers slowly learn over time how to fully understand the bottlenecks, how to circumvent the bottlenecks, and how to identify what actually constitutes a bottleneck.

Operating system / Libraries

Although the consoles tend to have very small operating systems and libraries when compared to the operating systems found on the PC, they are still an important factor of console video game development.

Operating systems and support libraries on video game consoles are used to fill many needs.  One such need is that the hardware developer will often attempt to save money on the production of console hardware by switching to cheaper components, or by integrating various components together.  It is up to the operating system to enable these changes, while having the effects of these changes be transparent to both the consumer and the developer.  The more that the operating system abstracts the hardware, the easier it is for the hardware developer to make changes to the hardware.  However, remember that this abstraction of the hardware comes at the price of reduced potential performance.  Also, the operating system and support libraries will commonly provide code for using the various components of the console.  This has the advantage that developers don’t have to know the low-level details of the hardware, and also potentially saves time since different developers won’t have to spend time creating their own versions of these libraries.  The advantage of not having to write this low level code is important in early generation projects, because the learning curve for the hardware is already quite high, and there may not be time in the schedule for doing very much of this kind of low-level optimization.  Clever developers will slowly replace the system libraries over time, especially with the speed critical subroutines, such as 3D vector math and polygonal set-up.  Also, the hardware developer will occasionally improve upon poorly written libraries, so even the less clever developers will eventually benefit from these optimizations. Improvements to the system libraries are a big reason why later generation games can increase dramatically in performance.

Development tools

On the PC, development tools have evolved over the years, and have become quite sophisticated.  Commercial companies have focused years of efforts on making powerful, optimal, polished, and easy to use development tools.  In contrast, the development tools provided for console video game development are generally provided by the hardware manufacturer, and are usually poorly constructed, have many bugs, are difficult to use, and do not produce optimal results.  For example, the C compiler usually doesn’t optimize very well; the debugger is often crude and, ironically, has many bugs; and there usually isn’t a decent software profiler.

Initially developers will rely on these tools, and the first few generations of software will be adversely affected by their poor quality.  Over time, clever programmers will become less reliant on the tools that are provided, or will develop techniques to work around the weaknesses of the tools.

In-house tools

In-house tools are one of the most important aspects of producing high performance console video game software.  Efficient tools have always been important, but as the data content in video games has grown exponentially over the last few years, in-house tools have become increasingly more important to the overall development process.  In the not too distant future, the focus on tool programming techniques may even exceed the focus on run-time programming issues.  It is not unreasonable that the most impressive video games in the future may end up being the ones that have the best support tools.

In-house tools tend to evolve to fill the needs of a desired level of technology. Since new consoles tend to represent dramatic changes in technology over their predecessors, in-house tools often have to be drastically rewritten or completely replaced to support the new level of technology. For example, a predecessor console may not have had any 3D support, so the tools developed for that console most likely would not have been written to support 3D. When a new console is released that can draw 100,000 polygons per second, it is generally inefficient to try to graft support for this new technology onto the existing tools, so the original tools are discarded. To continue the previous example, let's say that the new tool needs to be able to handle environments in the game that average about 500,000 polygons, with a worst case of 1 million polygons. Most likely the tool will evolve to the point where it runs pretty well for the average case, but runs just fast enough that the worst case of 1 million polygons is processed in a tolerable, albeit painful, amount of time. The reasons for this are that tools tend to grow in size and complexity over time, and tools tend to only be optimized to the point that they are not so slow as to be intolerable. Now let's say that a newer console is released that can draw 1 million polygons a second, and our worst case environment is now a whopping 1 billion polygons! Although the previous in-house tool could support a lot of polygons, it will still end up being either extensively rewritten or discarded, since it cannot easily be modified to be efficient enough to deal with this much larger number of polygons.

The ability of a tool to function efficiently as the data content processed by the tool increases is referred to as the ability of the tool to “scale”.  In video game programming, tools are seldom written to scale much beyond the needs of the current technology; therefore, when technology changes dramatically, old tools are commonly discarded, and new tools have to be developed.
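
A concrete, entirely hypothetical illustration of failing to scale: the vertex-welding pass below, the sort of thing an art-export tool might do, is perfectly adequate when levels have tens of thousands of vertices, but its cost grows with the square of the vertex count, so a tenfold jump in data makes it roughly a hundred times slower.

```c
/* A minimal, hypothetical sketch of why tools stop scaling.  Imagine an
 * art-export tool that welds duplicate vertices; the function and type
 * names are invented for the example. */
typedef struct { float x, y, z; } Vec3;

/* O(n^2): every vertex is compared against every earlier vertex.  Fine
 * for tens of thousands of vertices, intolerable for millions. */
int weld_naive(const Vec3 *v, int n, int *remap)
{
    int i, j, unique = 0;
    for (i = 0; i < n; i++) {
        for (j = 0; j < i; j++) {
            if (v[i].x == v[j].x && v[i].y == v[j].y && v[i].z == v[j].z)
                break;                      /* found an earlier duplicate */
        }
        remap[i] = (j < i) ? remap[j] : unique++;
    }
    return unique;                          /* number of unique vertices  */
}
```

The fix that restores scalability is usually a change of algorithm rather than a micro-optimization, for example bucketing vertices by a hash of their quantized positions so each vertex is compared against only a handful of candidates, and that kind of change is typically easier to make in a rewrite than to graft onto the old tool.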

The in-house tools can consume a large amount of the programming time of a first generation title, since not only are the tools complicated, but they evolve over time as the run-time game code is implemented.  Initial generations of games are created using initial generations of tools.  Likewise, later generations of games are created using later generations of tools.  As the tools become more flexible and powerful, the developer gains the ability to create more impressive games.  This is a big reason why successive generations of console games often make dramatic improvements in performance and quality over their predecessors.

Reuse of code

A problem that stems from the giant gaps in technology between console generations is that it is difficult to reuse code written for a previous generation of console hardware. Assembly code is especially difficult to reuse since the CPU usually changes between consoles, but the C programming language isn't much of a solution either, since the biggest problem is that the hardware configurations and capabilities are so different. Any code dealing directly with the hardware or with hardware-influenced data structures will have to be discarded. Even code that does something universal in nature, such as mathematical calculations, will most likely need to be rewritten, since the new hardware will probably have a different mathematical model.

Also, just as the in-house tool code becomes outdated, so does game code that is written for less powerful technology.  Animation, modeling, character, environment, and particle code will all need to be discarded.

In practice, very little code can be reused between technological leaps in hardware platforms.  This means that earlier generation games will not have much code reuse, but each new generation of games for a console will be able to reuse code from its predecessors, and therefore games will tend to improve with each new generation.
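
To the extent anything can be salvaged, one partial mitigation is to hide the math model behind a thin layer of typedefs and macros, so that game logic written above the layer at least has a chance of surviving the jump. The sketch below is purely illustrative; the 16.16 fixed-point format, the macro names, and the configuration switch are invented, not any console's real conventions.

```c
/* A purely illustrative sketch of isolating the hardware's math model
 * behind typedefs and macros.  TARGET_HAS_FPU, the 16.16 fixed-point
 * format, and the macro names are all assumptions made for the example. */
#ifdef TARGET_HAS_FPU

typedef float scalar_t;
#define TO_SCALAR(i)  ((float)(i))
#define MUL(a, b)     ((a) * (b))

#else  /* integer-only CPU: 16.16 fixed point */

typedef long scalar_t;
#define TO_SCALAR(i)  ((long)(i) << 16)
#define MUL(a, b)     ((scalar_t)(((long long)(a) * (long long)(b)) >> 16))

#endif

/* Game logic written against scalar_t and MUL() survives the port;
 * only the definitions above change with the hardware. */
scalar_t dot2(scalar_t ax, scalar_t ay, scalar_t bx, scalar_t by)
{
    return MUL(ax, bx) + MUL(ay, by);
}
```

Even with such a layer, data layouts and precision trade-offs tuned for one machine's caches and co-processors rarely fit the next, so in practice the layer softens the problem rather than solving it.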

Optimization

By definition, having optimal code is preferable to having bulky or less efficient code. It would therefore seem logical to say that to achieve maximum performance from the hardware, all code should be completely optimal. Unfortunately, this is not an easy or even practical thing to achieve, since writing completely optimal code has many nuances, and can be very time-consuming. The programmer must be intimately familiar with the details of the hardware. He must fully understand how to implement the code, possibly using assembly language since C compilers will often generate inefficient code. The programmer must make certain to best utilize the CPU caches. Also, the programmer should understand how the code may affect other pieces of code, such as its effects on the instruction cache, or the amount of resources it ties up. The programmer has to know how to effectively use co-processors or other devices. He must develop an algorithm that is maximally efficient when implemented. Also, the programmer will need to measure the code against the theoretical maximum optimal performance to be certain that the code can indeed be considered fully optimal.

Writing even highly optimized code for specific hardware is time-consuming, and requires a detailed knowledge of both the hardware and the algorithm to be optimized. It is therefore commonly impractical to attempt to highly optimize even a majority of the code. This is especially true when writing a first generation game, since the developer is not familiar enough with the intricacies of the hardware to be very productive at writing optimal code. Instead, it is more productive to spend time optimizing only the code that most profoundly affects the efficiency of the overall game. Unfortunately, identifying which code should be optimized can also be a difficult task. As a general rule, the code to be optimized is often the code that is executed most frequently, but this is not always the case. Performance analyzing, testing, and profiling can help identify inefficient code, but these are not perfect solutions either, and the experience of the programmer becomes an important factor in making smart decisions about which code should be optimized.
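
When the manufacturer's profiler is missing or untrustworthy, the usual fallback is crude manual instrumentation along the lines of the sketch below. It is generic, hypothetical C built on the standard clock() call; on real console hardware one would read a free-running hardware timer instead, so treat it only as an illustration of the idea.

```c
/* A generic, hypothetical fallback when no decent profiler exists:
 * bracket suspect code by hand and accumulate its cost per frame. */
#include <stdio.h>
#include <time.h>

static clock_t prof_start;
static double  prof_total_ms;

#define PROF_BEGIN()  (prof_start = clock())
#define PROF_END()    (prof_total_ms += 1000.0 * (double)(clock() - prof_start) / CLOCKS_PER_SEC)

/* Print and reset the accumulated time, averaged over a frame count. */
void prof_report(const char *label, int frames)
{
    printf("%s: %.3f ms per frame (average over %d frames)\n",
           label, prof_total_ms / frames, frames);
    prof_total_ms = 0.0;
}
```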

As a programmer gets more familiar with the intricacies of the hardware, he will be able to perform a greater amount of optimizations.  Also, when developing later generation games, the programmer will often be able to reuse previously written optimized code.  Plus, there is often more time in the schedule of later generation titles in which to perform optimizations.  This accumulation of optimal code is a big reason why games often improve in performance in successive generations.

Other Considerations

There are many other reasons to explain the improvement in performance of next generation software that are not directly related to programming for a video game console.  For example, developers will often copy or improve upon the accomplishments of other developers.  Likewise, developers will avoid the mistakes made by others.  Also, developers acquire and lose employees fairly frequently, which creates a lot of cross-pollination of ideas and techniques between the various development houses.  These and many other reasons are important, but since they are not specific to console video game development, they have not been specifically discussed.

CLICK HERE to CONTINUE to PART 3.


Related posts:

  1. Crash Bandicoot – Teaching an Old Dog New Bits – part 1
  2. Crash Bandicoot – An Outsider’s Perspective (part 8)
  3. Making Crash Bandicoot – part 5
  4. Making Crash Bandicoot – part 4
  5. Making Crash Bandicoot – part 3
By: agavin
Comments (9)
Posted in: Games, Technology
Tagged as: Andy Gavin, Application programming interface, Central processing unit, Console game, Crash Bandicoot, Crash Bandicoot 2: Cortex Strikes Back, game, Jason Rubin, Naughty Dog, Operating system, Playstation, Program optimization, Programming tool, pt_crash_history, Video game, Video game console

Crash Bandicoot – Teaching an Old Dog New Bits – part 1

Mar26

This is loosely part of a now lengthy series of posts on the making of Crash Bandicoot. Click here for the PREVIOUS or for the FIRST POST.

Below is another journal article I wrote on making Crash in 1999. This was co-written with Naughty Dog uber-programmer Stephen White, who was my co-lead on Crash 2, Crash 3, Jak & Daxter, and Jak 2. It’s long, so I’m breaking it into three parts.

Teaching an Old Dog New Bits

How Console Developers are Able to Improve Performance When the Hardware Hasn’t Changed

by

Andrew S. Gavin

and

Stephen White

Copyright © 1994-99 Andrew Gavin, Stephen White, and Naughty Dog, Inc. All rights reserved.

Console vs. Computer

Personal computers and video game consoles have both made tremendous strides in graphics and audio performance; however, despite these similarities, there is great benefit in understanding some important differences between the two platforms.

Evolution is a good thing, right?

The ability to evolve is the cornerstone of the long-term success of the IBM PC. Tremendous effort has gone into the PC so that individual components of the hardware can be replaced as they become inefficient or obsolete, while still maintaining compatibility with existing software. This modularity of the various PC components allows the user to custom build a PC to fit specific needs. While this is a big advantage in general, this flexibility can be a tremendous disadvantage for developing video games. It is the lack of evolution, the virtual immutability of the console hardware, that is the greatest advantage to developing high quality, easy to use video game software.

You can choose any flavor, as long as it’s vanilla

The price of the PC’s evolutionary ability comes at the cost of dealing with incompatibility issues through customized drivers and standardization. In the past, it was up to the video game developer to try to write custom code to support as many of the PC configurations as possible. This was a time consuming and expensive process, and regardless of how thorough the developer tried to be, there were always some PC configurations that still had compatibility problems. With the popularity of Microsoft’s Windows-based operating systems, video game developers have been given the more palatable option of allowing other companies to develop the drivers and deal with the bulk of the incompatibility issues; however, this is hardly a panacea, since it necessitates a reliance on “unknown” and difficult-to-benchmark code, as well as APIs that are designed more for compatibility than optimal performance. The inherent cost of compatibility is compromise. The API code must compromise to support the largest number of hardware configurations, and likewise, hardware manufacturers make compromises in their hardware design in order to adapt well to the current standards of the API. Also, both the API and the hardware manufacturers have to compromise because of the physical limitations of the PC’s hardware itself, such as bus speed issues.

Who’s in charge here?

The operating system of a PC is quite large and complicated, and is designed to be a powerful and extensively featured multi-tasking environment.  In order to support a wide variety of software applications over a wide range of computer configurations, the operating system is designed as a series of layers that distance the software application from the hardware.  These layers of abstraction are useful for allowing a software application to function without concerning itself with the specifics of the hardware.  This is an exceptionally useful way of maintaining compatibility between hardware and software, but is unfortunately not very efficient with respect to performance.  The hardware of a computer is simply a set of interconnected electronic devices.  To theoretically maximize the performance of a computer’s hardware, the software application should write directly to the computer’s hardware, and should not share the resources of the hardware, including the CPU, with any other applications.  This would maximize the performance of a video game, but would be in direct conflict with the implementations of today’s modern PC operating systems.  Even if the operating system could be circumvented, it would then fall upon the video game to be able to support the enormous variety of hardware devices and possible configurations, and would therefore be impractical.

It looked much better on my friend’s PC

Another problem with having a large variety of hardware is that the video game developer cannot reliably predict a user’s personal set-up. This lack of information means that a game cannot easily be tailored to exploit the strengths and circumvent the weaknesses of a particular system. For example, if all PCs had equally fast hard-drives, then a game could be created that relied on having a fast hard-drive. Similarly, if all PCs had equally slow hard-drives but a lot of memory, then a game could compensate for the lack of hard-drive speed through various techniques, such as caching data in RAM or pre-loading data into RAM. Likewise, if all PCs had fast hard-drives and not much memory, then the hard-drive could compensate for the lack of memory by keeping most of the game on the hard-drive, and only spooling in data as needed.
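
If every PC really were identical, this kind of tailoring could be decided once, up front, with something as simple as the sketch below, in which the thresholds, names, and strategy enum are all invented for illustration. The point of the surrounding discussion is that on the PC no such single decision is safe.

```c
/* A hypothetical sketch of the kind of tailoring uniform hardware would
 * allow.  The threshold numbers, the enum, and the function name are
 * invented for illustration only. */
#include <stddef.h>

typedef enum {
    LOAD_STREAM_FROM_DISK,   /* spool data in as the player moves        */
    LOAD_PRELOAD_TO_RAM      /* pay one long load, then never touch disk */
} LoadStrategy;

LoadStrategy choose_load_strategy(double disk_mb_per_sec, size_t free_ram_mb)
{
    if (disk_mb_per_sec > 4.0)        /* fast disk: stream on demand      */
        return LOAD_STREAM_FROM_DISK;
    if (free_ram_mb >= 64)            /* slow disk, lots of RAM: preload  */
        return LOAD_PRELOAD_TO_RAM;
    return LOAD_STREAM_FROM_DISK;     /* slow disk, little RAM: the game  */
                                      /* design itself has to compensate  */
}
```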

Another good example is the difference in polygon rendering capabilities. There is an enormous variation in both performance and effects between hardware-assisted polygon renderers, such that both the look of rendered polygons and the number of polygons that can be rendered in a given amount of time can vary greatly between different machines. The look of polygons could be made consistent by rendering the polygons purely in software; however, the rendering of polygons is very CPU intensive, so this may be impractical, since fewer polygons can be drawn, and the CPU has less bandwidth to perform other functions, such as game logic and collision detection.

Other bottlenecks include CD drives, CPU speeds, co-processors, memory access speeds, CPU caches, sound effect capabilities, music capabilities, game controllers, and modem speeds to name a few.

Although many PC video game programmers have made valiant attempts to make their games adapt at run-time to the computers they are run on, it is difficult for a developer to offer much more than simple cosmetic enhancements, audio additions, or speed improvements. Even if the developer had the game perform various benchmark tests before entering the actual game code, it would be very difficult, not to mention limiting to the design of the game, for the developer to write code that could efficiently and structurally adapt itself to the results of the benchmark.

Which button fires?

A subtle, yet important problem is the large variety of video game controllers that have to be supported by the PC. Having a wide variety of game controllers to choose from may at first seem like a positive feature, since more seems like it should be better than less, yet this variety actually has several negative and pervasive repercussions on game design. One problem is that the game designer cannot be certain that the user will have a controller with more than a couple of buttons. Keys on the keyboard can be used as additional “buttons”, but this can be impractical or awkward for the user, and may also require that the user configure which operations are mapped to the buttons and keys. Another problem is that the placement of the buttons with respect to each other is not known, so the designer doesn’t know what button arrangement is going to give the user the best gameplay experience. This problem can be somewhat circumvented by allowing the user to remap the actions of the buttons, but this isn’t a perfect solution, since the user doesn’t start out with an inherent knowledge of the best way to configure the buttons, and so may choose, and stick with, an awkward button configuration. Also, similar to the button layout, the designer doesn’t know the shape of the controller, so can’t be certain what types of button or controller actions might be uncomfortable to the user.
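
The remapping indirection itself is the easy part, as the generic sketch below shows (the action names and default layout are invented); the hard part described above is that the developer cannot know a good default mapping, or even the physical arrangement of the buttons, in the first place.

```c
/* A generic sketch of button remapping: game code asks about actions,
 * never about physical buttons, so the user can remap freely.  The
 * action names and default layout are invented for the example. */
typedef enum { ACTION_JUMP, ACTION_ATTACK, ACTION_PAUSE, ACTION_COUNT } Action;

/* Index = game action, value = physical button bit for that action. */
static int button_for_action[ACTION_COUNT] = { 0, 1, 9 };

/* Test an action against a bitmask of currently pressed buttons. */
int action_pressed(unsigned int pad_buttons, Action a)
{
    return (pad_buttons >> button_for_action[a]) & 1u;
}

/* Called from the options screen when the user remaps a button. */
void remap_action(Action a, int physical_button)
{
    button_for_action[a] = physical_button;
}
```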

An additional problem associated with game controllers on the PC is that most PCs are not sold bundled with a game controller. This lack of a standard, bundled controller means that a video game on the PC should either be designed to be controlled exclusively by the keyboard, or at the very least should allow the user to optionally use a keyboard rather than a game controller. Not allowing the use of the keyboard reduces the base of users that may be interested in buying your game, but allowing the game to be played fully using the keyboard will potentially limit the game’s controls, and therefore limit the game’s overall design.

Of course, even if every PC did come bundled with a standard game controller, there would still be users who would want to use their own non-standard game controllers.  The difference, however, is that the non-standard game controllers would either be specific types of controllers, such as a steering wheel controller, or would be variations of the standard game controller, and would therefore include all of the functionality of the original controller.  The decision to use the non-standard controller over the standard controller would be a conscious decision made by the user, rather than an arbitrary decision made because there is no standard.

Chasing a moving target

Another problem associated with the PC’s evolutionary ability is that it is difficult to predict the performance of the final target platform. The development of video games has become an expensive and time-consuming endeavor, with budgets in the millions and multi-year schedules that are often unpredictable. The PC video game developer has to predict the performance of the target machine far in advance of the release of the game, which is difficult indeed considering the volatility of schedules and the rapid advancements in technology. Underestimating the target can cause the game to seem dated or under-powered, and overestimating the target could limit the installed base of potential consumers. Both could be costly mistakes.

Extinction vs. evolution

While PC’s have become more powerful through continual evolution, video game consoles advance suddenly with the appearance of an entirely new console onto the market.  As new consoles flourish, older consoles eventually lose popularity and fade away.  The life cycle of a console has a clearly defined beginning:  the launch of the console into the market.  The predicted date of the launch is normally announced well in advance of the launch, and video game development is begun early enough before the launch so that at least a handful of video game titles will be available when the console reaches the market.  The end of a console’s life cycle is far less clearly defined, and is sometimes defined to be the time when the hardware developer of the console announces that there will no longer be any internal support for that console.  A more practical definition is that the end of a console’s life cycle is when the public quits buying much software for that console.  Of course, the hardware developer would want to extend the life cycle of a console for as long as possible, but stiff competition in the market has caused hardware developers to often follow up the launch of a console by immediately working on the design of the next console.

Each and every one is exactly the same

Unlike PCs, which can vary wildly from computer to computer, consoles of a particular model are designed to be exactly the same. Okay, so not exactly the same, but close enough that different hardware revisions generally vary only in ways that are minor from the perspective of the video game developer, and are normally transparent to the user. Also, the console comes with at least one standard game controller, and has standardized peripheral connections.

The general premise is that game software can be written with an understanding that the base hardware will remain consistent throughout the life-span of the console; therefore, a game can be tailored to both exploit the strengths of the hardware, and to circumvent the weaknesses.

The consistency of the hardware components allows a console to have a very small, low level operating system, and the video game developer is often given the ability to either talk to the hardware components directly, or to an extremely low hardware abstraction layer.

The performance of the components of the hardware is virtually identical for all consoles of a given model, such that the game will look the same and play the same on any console.  This allows the video game developer to design, implement, and test a video game on a small number of consoles, and be assured that the game will play virtually the same for all consoles.

CLICK HERE FOR PART 2



Related posts:

  1. Crash Bandicoot – An Outsider’s Perspective (part 8)
  2. Making Crash Bandicoot – part 5
  3. Crash Bandicoot as a Startup (part 7)
  4. Making Crash Bandicoot – part 4
  5. Making Crash Bandicoot – part 1
By: agavin
Comments (29)
Posted in: Games, Technology
Tagged as: Andy Gavin, Application programming interface, Central processing unit, Crash Bandicoot, Crash Bandicoot 2: Cortex Strikes Back, DirectX, Jak and Daxter, Naughty Dog, Personal computer, pt_crash_history, Stephen White, Video game, Video game developer, Video Games

iPad 2 – Less is More

Mar25

Second Generation iPad

Being the consummate gadget man, I succumbed to the iPad 2 upgrade. In fact, I even ordered it at 1:01am, only 1 minute after they went on sale (at the Apple online store). Despite my jumping on the bandwagon, it still took 13 days to arrive, mostly because I got a 3G model and those were slow to ship.

In any case, over the last year I have been pleasantly surprised at how incredibly useful the iPad is. I’ve already written one article about it, which is all still true. I owned a Kindle before the iPad and found that to be of very limited use. Primarily it was good for long vacations where I previously would have dragged 20-30 paperbacks (weighing down my suitcases). With the Kindle, just one little device covered that. And the thing had a tremendous battery life. But reading on it was annoying, mostly because the page turning was so slow and the screen only held about 60% of a single paperback page.

Enter the iPad. Seemingly just a giant iPhone, it’s actually radically different. As a book reader it holds a full page, and it’s fast. You can flick back and forth fast enough that it’s “browsable.” This was excruciating on the Kindle. The screen is a little harder on the eyes, and the battery life only 10-12 hours instead of weeks, but the speed and size are more important to me. Plus, when you get an email, or feel the obsessive compulsive need to check today’s blog stats, you can just flip over instantly (iOS 4.2 on — so useful I was running the beta for months). It’s also just a darn comfortable way to do all your casual computer crap in bed, in the kitchen, watching TV, etc. There are a number of reasons why. Unlike even a laptop, it’s instant on, and you can tuck it in the couch and grab it when an email comes in or you feel the need to look up actors on IMDb (which I now do constantly). The battery life is such that as long as you charge it while you sleep, you can do whatever the hell you want with it during the day and not worry. This is so not true of any laptop, including the amazing MacBook Pros and Airs with their long battery life. You still have to plug them in if you are going to use them all day. The iPad isn’t a necessity, but it sure is convenient.

The First Generation, in a Tuff-luv case

Now as to the iPad 2. If you don’t have an iPad and are at all interested (plus have the disposable $500-829), get one. The first gen ones are going on sale cheap now too. But if you already own a first gen iPad, it’s more about personal tolerance for being slightly outdated. The new one doesn’t do anything the 1st can’t except for video chat. But it is thinner, lighter, and about twice as snappy. For me, that alone is worth it. As I said, I’m a gadget freak and I use the pad all the time, every day. The thinness and weight are noticeable, as is the speed. It’s certainly snappier. Apps load faster, and the multitasking flips between apps much more smoothly. Not that the first iPad was slow, but this is faster. If you are into games, the GPU is supposedly 9x faster. Infinity Blade and the like seem very zippy now, and they weren’t bad before.

One other thing worth mentioning is the developer-only multitouch gestures added to iOS 4.3. To use these, you have to connect the iPad to your Xcode 4 enabled Mac and turn on developer mode. Xcode is a free download for devs, or a $5 purchase from the new Mac App Store. I’ve only been using these for a few days but they’re awesome. Here’s yet another example of how Apple gets the little things right. There are 4 gestures. One to bring up and down the multitasking bar. Another to go back to the home screen, and a pair to flip back and forth between apps. It’s surprising how convenient and natural these are.

I haven’t gotten used to the subtle button changes on the new iPad yet. There is more angle to the bevel, which gives the physical controls, including the docking jack, a slightly increased inset, but I’m sure in a couple of days they’ll seem normal.
I got one of the crazy new covers too. I love the cool magnetic lock and the auto turn on / turn off feature. We will see how well the cleaning component does. The thing is ultra slim and light in the cover, particularly compared to the cushy but bulky full leather case I had on the old one. But on the other hand it’s a bit slick, and I’ve already fumbled it once and certainly don’t want to drop it. I might have to see if someone sells some sticky little tape/decal. That was a nice thing about my old case. I have a thin sticky rubber case on my iPhone 4 just for the texture.

All in all, the iPad 2 is, like everyone says, a typical Apple evolutionary tune-up to an already brilliant product. Certainly it’s better in nearly all ways, and the combination of Apple design, software, and heavy vertical integration makes it hands down the only tablet worth considering. I’m writing this blog post on it while out on the town, and while theoretically I could do that on my phone, I never would.

My previous iPad article can be found HERE.

Side by Side

The thickness

 

Related posts:

  1. Why the iPad is a Document game changer
By: agavin
Comments (10)
Posted in: Technology
Tagged as: Apple, AppleStore, IPad, iPhone, MacBook Pro, Macintosh, Tablet Computer, United States

Bleeding Violet

Mar24

Title: Bleeding Violet

Author: Dia Reeves

Genre: Paranormal YA

Length: 84,000 words, 454 pages

Read: March 14-20, 2011

Summary: Unique, good, and very different.


This is a weird weird book, and I mean that in a good way. Nominally, it’s about a schizophrenic girl, Hanna, whose dad has died and who decides to move in unannounced with the mom she’s never met. But her mom doesn’t live in a normal town. She lives in some kind of weird place in Texas where gates between universes have let all sorts of strange monsters and realities in. A town with its own supernatural police.

The voice here is really fun. It’s first person past, but with a sort of cavalier, devil-may-care, crazy-girl style. I liked it. Some sentences were fantastic (both literally and figuratively). Not exactly in the lyrical kind of way that you might expect, but because of their deft wit, and quick and creative way of describing utterly fantastic goings on.

Because this book is FILLED, PACKED, STUFFED, with weird monsters and magic. Reeves uses the protagonist and POV character very deftly to explain it, or mostly just show what happens. She doesn’t feel the need to confine herself to easy concepts either. For example, sound sucking, student grabbing, invisible squids live inside the high school windows, and one of the characters defeats them with a deck of playing cards! It’s a tribute to her skill that I could follow nearly all of this stuff. And it’s compact too, not being a very long book and containing dozens of strange encounters. The descriptions are lean but vivid. Occasionally she violates POV slightly on the side of clarity, because the protagonist is new to this stuff and explains it with a bit more understanding than she might be expected to have. But this isn’t very noticeable. Now I do wonder if someone with less experience reading speculative fiction in all its forms might have trouble with this novel. I mean, I’ve read A LOT (5000+ speculative novels), and played hundreds if not thousands of video games with magical systems etc. We won’t even count the movies and TV shows. Certainly someone who likes their reality… well… real, would be put off by the book. I wasn’t. The supernatural flavor was really interesting and unique, reminding me ever so slightly of something like the eerie Lost Room, or the wonderful but very out of print Marianne series by Sheri S. Tepper.

The choice of using such a fractured POV character was interesting. There could be an argument that the entire book was some sort of delusion. I myself just treated Hanna’s viewpoint as literal, and everything she saw as factual. The protagonist, and some of the other characters for that matter, don’t feel entirely real. They aren’t cardboard per se, as they feel well rounded, they just have a bit of surreal style to them that comes from their rather depressed moral compass. There’s a lot of killing and murder in this book, often horrifically grisly in fact, and no one seems to care too much. One of your best friends has been impregnated by evil demon spawn who are eating her from the inside out? Well, just cut them out and leave her bleeding to die. That sort of thing. It works in the story, but if you stop and think about the reactions any non-psychopathic person might have… These characters just move on. It didn’t really bother me in the context of this story, as the narrator’s viewpoint tends to whitewash away the consequences.

There’s also a good bit of cavalier sexuality — a welcome break from the self-censorship that seems to be the norm since the 90s. Hanna is certainly open minded in that regard, and likes to take off her clothes. Unfortunately 🙂 there isn’t a lot of detail; like most everything else in the book, a lot is left to the imagination. This is also part of the trend. To tell the truth, Judy Blume’s Forever (1975) is still the most explicit teen book I can remember.

Overall, this is a great book, but it’s much more FANTASTIC than your typical paranormal. Being a fantasist, that was more than fine with me.

Related posts:

  1. Book Review: White Cat
  2. Book Review: Lost It
By: agavin
Comments (0)
Posted in: Books
Tagged as: Bleeding Violet, Book, Book Review, Book Reviews, Dia Reeves, Fiction, Forever, Judy Blume, Lost Room, Marianne, Narrative mode, Paranormal, Paranormal romance, Playing card, Protagonist, Texas, YA, Young-adult fiction

Fraiche Santa Monica

Mar23

Restaurant: Fraiche Santa Monica [1, 2]

Location: 312 Wilshire Blvd., Santa Monica, CA 90401. Phone : 310.451.7482

Date: March 19, 2011

Cuisine: Cal French Italian

Rating: On the way up.


This particular location, adjacent to the Barnes and Noble on Wilshire near the promenade, has a fairly checkered past. Two or three years ago the Fraiche group turned it into Riva. This was supposed to be a coastal Italian, but to my taste wasn’t really Italian at all — although they made a decent pizza. In any case, it failed and they rebooted it as Fraiche Santa Monica with an entirely new menu and staff, albeit an identical interior. This is sort of a spin off of the Culver City location (REVIEW HERE).

One corner of the back room. I didn’t have much of a wide angle lens (food after all). It’s a pretty nice space.

The wine by the glass list.

“Bourgogne Pinot Noir, Les Chapitres de Jaffelin, Burgundy, 2009.” As a burghound this was about the bare limit of drinkability for Pinot Noir. A little sour and acidic and decidedly unbalanced. But then again, I rarely expect much from “Bourgogne” (Burgundy which is not AOC to a particular village or vineyard).

The bread was hot out of the oven, and very nice and crunchy. Olive, mashed and oiled.

Today’s menu. This is actually the second time I’ve eaten at Fraiche SM (I did so once right after they opened) and in the meantime they have moved the menu to be much closer to the new one at Fraiche Culver City (detailed review of that here).

“POACHED PEAR SALAD, Endive, baby wild arugula, candied walnuts, Point Reyes blue cheese, red wine vinaigrette.”

“Baby Beets, House Made Ricotta / Orange / Pistachio.” Sweetness of the beets meshes with the cheesy sauce. Beet salads have become very passé, but when well done (like this one), I like them.

“ROASTED PEPPERS ARUGULA & BURRATA, Shallots, 12 year old balsamic and extra virgin olive oil.” This was as good a Burrata as I’ve had at a restaurant. They still aren’t quite as sensual as my own take on the cheese.

“Valpolicella Ripasso, Classico Superiore, David Sterza, Veneto, 2008.” Much better than the generic Burgundy. This was a fine wine of the type. Grapey, but not as much so as an Amarone.

“MUSHROOM RISOTTO, Arugula, Pine Nuts, Pecorino.” Nice nutty, mushroomy risotto.

“AGNOLOTTI, Mushrooms,  mascarpone, truffle butter.” These are really good. The pasta is nice fresh egg pasta. It tastes mostly of butter and mushroom. Butter!

“GARGANELLI, Mushroom Bolognese, Parsley, House Made Ricotta.” I actually expected this to be a meat pasta, but it’s vegetarian, with the “ragu” being made from mushrooms. It was tasty, particularly the ricotta which, being homemade, was more like a real Sicilian ricotta than one usually gets here. The mushrooms lent it a fairly rich taste, but it wasn’t heavy at all (like a meat one would be).

“Rigatoni, Beef & Pork Ragù / Scallion / Gruyère.” This one was great. Basically a Bolognese, but really good. Close even to one of my ultimate pasta favorites, the lamb ragu at Capo (SEE HERE).

We were too full for desserts, but Fraiche has really good ones, so I snuck in a photo of the Budino from a trip to the Culver City joint. You can look there for a bunch more dessert photos. The dessert menu is nearly identical.

“Caramel Budino, Vanilla Mascarpone, Sea salt.” Mildly caramel/creamy with that nice salt factor. Good, but not quite as good as the similar dessert at Gjelina (SEE HERE).

Fraiche SM seems to be settling into its groove. It was better than last time, and quite a bit better than Riva. It isn’t a lot different than the Culver City location, but the menu is slightly smaller, and missing the assorted “pots of stuff” that are fairly unique over there. It does still have the very good fresh pastas. I need to try a nice meaty one.

Related posts:

  1. Fraiche take on Franco-Italian
  2. Finally, Modern Dim sum in Santa Monica
  3. Piccolo – A little Italian
  4. The New Cal Cuisine: Rustic Canyon
  5. Quick Eats: Divino
By: agavin
Comments (5)
Posted in: Food
Tagged as: AGNOLOTTI, Burrata, California, Culver City California, Dessert, Eruca sativa, Food, Fraiche, Fraiche Santa Monica, Italian cuisine, Los Angeles, Olive oil, pasta, Restaurant, Restaurant Review, Salad, Santa Monica California, side, vegetarian, Wine

TV Review: Downton Abbey

Mar22

Title: Downton Abbey

Genre: Historical (England 1912-1914)

Watched: March 14-19, 2011

Status: First Season (second coming fall 2011)

Summary: Great Television!

 

My parents, as lifelong anglophiles and Masterpiece Theatre viewers, recommended this British TV series set in 1912-1914. It wasn’t a hard sell once I read the blurb, and I’m so glad we watched it. This is really fine television.

Downton Abbey is a fictional great English country estate, owned by the middle-aged Earl of Grantham. He has a loving wife and three daughters, not to mention about 30 assorted housekeepers, maids, footmen, and the like. What he doesn’t have is an heir, as his cousin, the closest male relative, went down with the Titanic. The major family drama here is the conflict between the complex English system of inheritance (and this earl’s specific case) and the circumstances. The playground is an anything-but-simple household that contains no less than 20 major cast members.

No show or movie I’ve ever seen before so intimately details the complex organization of a great estate like this. I’m always fascinated by the evolution of everyday living (for rich and poor alike), and anyone who thinks the rich keep on getting richer ought to see this. And then remember that 100 years earlier a house like this would have had five times the servants. Also dominant are the politics and different roles of the various staff and family members. 1914 is the end of an era, as the double whammy of World War I/II will shatter the aging remains of Europe’s caste system like a crystal vase dropped off the Empire State Building (HERE for some of my thoughts on that). In any case, this series is to a large extent about this particular moment, so indicative of the long history of social change. We have employee rights, women’s franchise, choice in marriage and family, even the availability of healthcare and the installation of the telephone.

But that’s not what makes it good, merely interesting. What makes it good is the phenomenal writing and acting. Maggie Smith (younger viewers will know her better as Professor Minerva McGonagall) is a standout as the reigning Earl’s crotchety old mother, but the entire cast is great. For this many characters, they are each highly distinct and multidimensional. Some you love, some you love to hate, but they all make it entertaining. Downton Abbey is not a series about sudden murders or gratuitous brothel scenes like the great HBO dramas (and I love those too!), but instead a series of intertwined character studies that reveal their era as well as timeless facets of human nature.

So unless you thought Transformers 2 was high entertainment, go watch!

Related posts:

  1. Book and TV Review: Dexter
By: agavin
Comments (4)
Posted in: Television
Tagged as: Downton Abbey, drama, Earl of Grantham, Edwardian era, England, Gosford Park, HBO, Julian Fellowes Baron Fellowes of West Stafford, Maggie Smith, masterpiece, PBS, Television program, Titanic, World War I

Not so Glad about Gladstones

Mar21

Restaurant: Gladstones Malibu [1, 2]

Location: 17300 Pacific Coast Hwy, Pacific Palisades, CA 90272. (310) 454-3474

Date: March 18, 2011

Cuisine: American Seafood

Summary: Fast bordering on brusque

 

Two months to the day after I tried the slightly revamped Gladstones (REVIEW HERE), we decided to go back for a dinner. The two of us walked in the door at 6:30. They had us walking out at 6:55. A new record in whirlwind service. So fast that I was in physical pain, my stomach in spasm from having wolfed down the food.

“Dragonfruit mojito.” This concoction was disgustingly sweet, tasting of artificial strawberry and whatever weird kind of fruit is baked into “dragonfruit bacardi.” I dislike this trend of overzealous corporate marketing and lazy bartenders in which drinks are made with “flavored” alcohols instead of actual mixers. There is really no circumstance where this kind of factory flavored drink tastes better than just mixing. It is “easier.” Like pre-mixed Chernobyl green margarita mix. As the rest of the cocktails were this sort, I moved on to a glass of wine.

Bread. Warm sourdough. Nothing to complain about here.

“CRAB CAKES  Remoulade, Arugula & Fennel.” These arrived as the words of our order were hanging in the air, but they were tasty enough. Not on the level of either the Houstons or Capo crabcakes, but respectable.

“BAKED SALMON CARTOCCIO  Saffron Potato, Roast Fennel, & Olive Herb Tapenade.” Much like what we made at our own dinner party the week before. But not bad either. Too bad it arrived while I was still working on my appetizer — and I’m a fast eater.

“CURRIED COCONUT SHRIMP  Jasmine Rice, Yellow Curry, Thai Basil & Passion Fruit.” I had ordered this two months before and enjoyed it immensely. Something was really wrong with it tonight. Maybe the fact that they cooked it in 3 minutes flat? The sauce was totally out of balance. The curry flavor very muted and the lime massively dominant. It just didn’t taste good that way, being almost unpleasantly sour.

The bus boy was pulling our plates from us as I was literally forking the last couple of shrimp. I mean, I had to reach into the air to get them. The waitress teleported over, asked if we wanted dessert, and, hearing the negative, slapped the check down. They weren’t rude or unpleasant, but it was all so rushed that I felt an almost compulsive need to hurry in order to match their pace. I don’t mind a fast dinner sometimes, but this was ridiculous.

More fundamentally, I also worry about quality control in the kitchen. Things just seemed much more lackluster than the previous time I was here. And it’s very expensive — overpriced in fact. So I don’t think I’ll be back for a while.

Related posts:

  1. Quick Eats – Gladstones by the Sea
  2. Red Medicine the Relapse
  3. Dinner and Drinks at Tavern
  4. Quick Eats: Brentwood
  5. Fraiche take on Franco-Italian
By: agavin
Comments (3)
Posted in: Food
Tagged as: Australia, Chernobyl, Cocktail, Dinner, Fish and Seafood, Gladstones, Home, Restaurant, Restaurant Review, Seafood, side dishes, Thai Basil, vegetarian

Takao Two

Mar20

Restaurant: Takao [1, 2, 3, 4, 5, 6]

Location: 11656 San Vicente Blvd, Los Angeles, CA 90049. (310) 207-8636

Date: March 13, 2011

Cuisine: Japanese / Sushi

Rating: 9/10 creative “new style” sushi


I’ve already covered Takao in some detail HERE, but we went back (we go often) and I built another “custom omakase” trying some different things. The full menu and some information on the history of the place can be found through the above link.

House cold sake. Masumi “Okuden-Kanzukuri” Nagano prefecture.

Miso soup. I think if you ask they have a couple different types. This is the basic scallion and tofu.

Big eye tuna sashimi. This displays the fish at its finest.

Wild Japanese Scallop sashimi. I love good scallops. These had that pleasant meaty texture, and the soft “scallopy” flavor.

Tai (red snapper), with garlic, salt, red peppercorn, onions, olive oil. A very bright flavor, and the peppercorns, not spicy at all, add a nice textural component.

Maine lobster tempura (1/2). Takao has a lot of interesting tempuras. Uni (my second favorite), sardine, crab, an unusual seafood pancake with shiso, and more. This is a decadent favorite of mine, and in a half portion is pretty reasonable.

Rock Shrimp Tempura Dynamite. The underlying component is in itself tasty: sweet rock shrimp perfectly fried. Then you add some dynamite with its zesty zing and it gets even better. For those not in the know, dynamite is a warm sauce consisting of mayo, sriracha hot sauce, and masago smelt roe.

This is a very traditional Japanese egg custard with bits of mushroom, shrimp, and white fish baked inside. It has a very subtle mellow eggy flavor I find nostalgic from my many trips to Japan.

Just some of the sushi.

In the very front, Wild Japanese Scallop sushi. Behind that next to the wasabi is Tai (red snapper).

In the back, chu-toro (fatty tuna belly). Melts in your mouth!

Salmon of course.

Kanpachi (young yellow tail).

In the center, Ika (squid), perfect chewy pasty texture.

And fresh raw Tako (octopus). Most places serve it only frozen/cooked. This had a bit of yuzu on it, delicious.

On the left, Ikura (salmon eggs), and on the right Uni (Santa Barbara Sea Urchin). Both delicious.

Albacore with a bit of ginger and scallions.

Salmon tempura cut roll (technically for my two year old).

A bit more sushi. In the back grilled Unagi (fresh water eel) rolls, and Hamachi (yellowtail) and scallion rolls.

Kani (Alaskan king crab) sushi.

Tamago (sweet egg omelet) sushi.

And some vanilla mochi balls (ice cream covered with sweetened pounded rice). The red stuff is strawberry sauce.

Takao is top flight as always. I tend to enjoy ordering a la carte like this best, but it’s actually more expensive than getting an omakase, perhaps because I order a lot more sushi.

For my LA Sushi index, click here.

Related posts:

  1. Food as Art – Takao
  2. Sushi Sushi = Yummy Yummy
  3. Sasabune – Dueling Omakases
  4. Matsuhisa – Where it all started
  5. Food as Art: R.I.P. The Hump
By: agavin
Comments (6)
Posted in: Food
Tagged as: Black pepper, Dessert, Food, Hamachi, Japan, Japanese cuisine, Los Angeles, Restaurant, Restaurant Review, Sashimi, side dishes, Sushi, Takao, Tamago, Tuna, Uni, vegetarian, Yellowtail

Tithe – A Modern Faerie Tale

Mar19

Title: Tithe – A Modern Faerie Tale

Author: Holly Black

Genre: Paranormal YA

Length: 66,000 words, 310 pages

Read: March 13, 2011

Summary: Well written and evocative.

 

This is the second Holly Black book I’ve read. I enjoyed White Cat (REVIEW HERE) a lot and so I went back to read her debut novel. And liked it even more.

The similarities are striking. Both are short YA books, with nice prose and likable main characters thrown into ‘weird’ paranormal situations. Both have the action so condensed as to occasionally be confusing. Both wrap themselves up in the last quarter in a way that compromises the believability of the secondary characters. Both have unhappy but not completely tragic endings. While White Cat’s premise is perhaps a tad more original, I found Tithe‘s creepy fairy flavor more to my taste. Not that I didn’t like the first, but I really liked certain things about the second.

Tithe is written in third person past, with the protagonist Kaye dominating the POV. Mysteriously, approximately 5-10% is from the point of view of her friend Corny, and about 2% from the romantic interest. These outside POVs felt wrong, and at least in the Kindle version, no scene or chapter breaks announced the transitions. Every time one happened I was confused for a paragraph or two and knocked out of the story. Still, said story was more than good enough to overcome this minor technical glitch.

Kaye is an unhappy 16 year-old with a loser mom. When they move back to New Jersey she is rapidly involved with the Fey, discovers she’s a green skinned pixie, and gets drawn into a conflict between the Seelie and Unseelie (rival fairy) courts. It’s a fun read, and the prose is fast and evocative of the fey mood. Ms Black seemed to have done at least some research and the feel is quite good. The loose descriptive style sketches some rather fantastic creatures and scenarios, and that works. There is some darkness (which I like), and wham bam death of secondary characters without the proper emotional digestion. There is sexuality, but no sex (boo hiss!).

But I really like the way she handled the fairies. There isn’t a lot of description, but what there was left me filling in my own detailed, sordid, and mysterious collage of imagery.

I was loving the first two thirds of the book, and then it pivoted a bit and lost me a little. Don’t get me wrong, I still liked it, but the last third felt sketchier. The author had a bunch of double takes and betrayals on her outline, and it felt to me that it didn’t really matter if the secondary characters got to be true to themselves — they just followed the script. The protagonist’s best friend dies in like two seconds, and there is barely any reaction. Everyone also seemed to roll way too easily with the rather gigantic punches (as in: Fairies are real). And to be darn good at picking up new powers in no time at all. This is a typical issue, and very hard to address perfectly, but it always bugs me when magic seems too easy. White Cat had the same final act issues.

It’s still a fun book — way above average — with nice prose and breakneck pace. But the potential for great gave way to merely very good.

Related posts:

  1. Book Review: Across the Universe
  2. Book Review: XVI (read sexteen)
  3. Book Review: Lost It
  4. Book Review: The Windup Girl
  5. The Name of the Wind
By: agavin
Comments (4)
Posted in: Books
Tagged as: Amazon Kindle, Arts, Book, Book Review, Fairy, Fiction, Holly Black, New Jersey, reviews, Tithe, Tithe: A Modern Faerie Tale, Tithing

Seconds at Sam’s by the Beach

Mar18

Restaurant: Sam’s by the Beach [1, 2, 3]

Location: 108 W. Channel Rd.(PCH), Santa Monica, CA 90402. 310-230-9100

Date: March 12, 2011

Cuisine: Cal French International

Rating: Stellar food and unparalleled service.


I already covered the background to Sam’s in my FIRST REVIEW. Let’s just say this is a local place with an unusual and inventive menu that’s worth a drive.

I’d never heard of this “lesser” Bordeaux, but Sam opened this half-bottle and it was very nice. Characteristically Saint-Emilion smooth. The 8 or so years gave it just enough age to settle the tannins.

Today’s menu.

The usual amuse. Little fried pockets of spinach and cheese.

Homemade bread and the olive oil sesame dip.

“Roasted Beet Salad, mixed with onions and tomato in Aged balsamic dressing, served with Feta Cheese croquet.”

This was a special. Seared Kanpachi (young yellowtail) with arugula, avocado, tomatoes, in a citrus ginger vinaigrette. The dressing was to die for, and mated perfectly with the sushi grade fish.

“Vegetarian Crepes. Homemade Crepes filled with Swiss chard, wild mushrooms and zucchini served in tomato coulis.” This is a half order, as the normal one has two of the burrito-like crepes. This is a very nice vegetarian option, and surprisingly hearty. The sauce is bread dippingly yummy.

“Lamb Chorizo Risotto, Carnaroli rice prepared with lamb sausage, fresh spinach, feta cheese, in meyer lemon broth.” This isn’t your typical Italian Risotto either, but it’s spectacular, and much lighter. There is a lovely tang from the lemon, and the sharp goat cheese, and the sausage is to die for.

The dessert menu.

His creme brulee is straight up traditional, and it’s the second best I’ve ever had in the world (there was this one in Avignon…). The meat of it is thick, creamy, and all vanilla.

A new dessert (at least for us). This take on the flourless chocolate cake is moist, dense, and chocolatey — as it should be.

Sam is also starting a new thing for Sunday nights, pizza night!  He has a pizza oven. We’ll have to come back and try these, see how they compare to my Ultimate Pizza. I’m particularly eager to try the Shawarma.

Related posts:

  1. Food as Art: Sam’s by the Beach
  2. Brunch at Tavern 3D
  3. Brunch at Tavern – again
  4. Rustic Canyon 3D
  5. Dinner and Drinks at Tavern
By: agavin
Comments (3)
Posted in: Food
Tagged as: Business and Economy, California, Carnaroli, Cheese, Cooking, Crêpe, Dessert, Feta, Food, Los Angeles, Restaurant, Sam's by the Beach, Santa Monica California, side dishes, Spinach, United States, vegetarian

Book Review: White Cat

Mar17

Title: White Cat

Author: Holly Black

Genre: Paranormal YA

Length: 76,000 words, 310 pages

Read: March 12, 2011

Summary: Well written, fun, but a little contrived.

 

This is yet another foray into the world of paranormal YA (I am, after all, doing research for my own writing). Holly Black is a best selling YA and MG author. This book, unusually, has a male protagonist, and he’s part of a family of “curse workers,” although he himself doesn’t do any magic. He lives in an alternative reality where a small minority of people are able to “lay on hands” in a bad way and curse people. They are known to society, it’s even illegal, and they formed into criminal gangs in the 1930s just like the Mafia.

The premise is decent, although I’m not a fan of versions of our reality with outed paranormal groups. I didn’t really buy the changes at a social level. The whole existence of this kind of power in volume would throw everything off, and here the only real social change is that everyone wears gloves (because it’s through bare skin that the magic works). We are reminded often of the glove factor.

The writing is very solid and straightforward, in first person present. So straightforward it took me a while to even notice the tense. Or maybe writing it myself is acclimating me to it. The protagonist is likable and felt fairly real, although maybe not all of his decisions did. And I didn’t really feel the proper weight of his emotions. Big things happen, but without big feelings. By page three or thereabouts we discover he murdered his girlfriend. We’re supposed to still like him. And we do, but mostly because it’s totally obvious that he didn’t REALLY murder her, he only thinks he did. Oh, and we quickly hear about the one flavor of curse worker that’s REALLY rare. And guess who’s from a magical family and doesn’t have any power…

But I enjoyed the book — quite a bit — I read it in half a day after all. Another book I attempted to read that same morning was so execrable that I only made it to fifty pages, so this was a vast improvement.

A couple other beefs. At times the writing was so lean that I felt like I missed something in the action and had to page back to find it — but it wasn’t even there. It was then obvious moving forward what had happened; it just seemed that the attempt at leanness and/or aggressive editing had taken the edge off the clarity. Then as we moved into the second half we hit the “after the big reveal” syndrome which many books with reveals suffer from. I’ve mentioned this before (like HERE or HERE), but basically this is where, after the big shocker, no one really seems to act with appropriate emotional gravitas. I’m used to it, and it’s a tough problem to solve, so I moved on to the ending.

Which was the weakest part. Everything juggled into place such that the people served the plot rather than their characters. The plot wasn’t bad; it’s just that I didn’t really see some of the characters acting like they did.

Overall, the story was fast and fun. As I said Ms Black is a skilled writer, and the prose zipped along, with nice quick descriptions, and she isn’t afraid to be a bit dark or sexy (considering it’s YA). The gratuitous twist on the last two pages bugged me, but I ordered the sequel (which the Twitter/FB buzz says is very good) and another of the author’s books.

How different these neat little package YA books are from a meaty tome like The Wise Man’s Fear (which I finished the same day). There are subplots in that book about the size of this entire story.

For a review of Holly Black’s first novel, Tithe: A Modern Faerie Tale, click here.

Related posts:

  1. Book Review: Lost It
  2. Book Review: The Windup Girl
  3. Book Review: Across the Universe
  4. Book Review: XVI (read sexteen)
  5. Book Review: The First American
By: agavin
Comments (4)
Posted in: Books
Tagged as: Book, Book Review, Fiction, Holly Black, Paranormal, Paranormalcy, Protagonist, The Curse Workers, United States, White Cat, Writing, Young-adult fiction

Game of Thrones – The Houses

Mar16

With the premiere of Game of Thrones, the HBO series based on what is perhaps my all time favorite fantasy series, fast approaching, the network has been releasing all sorts of goodies. Now I’ve posted about this before, but these books, and it looks like the show, are so darkly delicious that I feel I must share.

Power (above) is a new trailer.

Fear and Blood (above) is another new trailer for the show in general.

Then we have a whole series of videos on some of the most important Great Houses. Like Dune before it, Game of Thrones is a story about the interplay of politics and loyalty among a number of great factions. This was frequently true during the late Middle Ages, and to some extent the series is loosely based on the Wars of the Roses.

The Starks (above) are the moral center of the story.

House Baratheon holds the throne… for now.

The Lannisters you love to hate — except for Tyrion, who rules.

House Targaryen knows all about dragons.

Above is a more detailed video on Jaime Lannister.

And above, Robb Stark.

Above is Littlefinger.

And above, a video about the world in general.

For a review of episode 1, click here.


Dinner Party – It all starts with Cheese

Mar16

Last Friday we hosted a little dinner party. I can’t say it was purely an excuse for more cooking and food photos, but well, here they are. Everything in this meal is made from scratch.

The first course in summary.

Cheese is always a good start. This time I tried a new cheese shop, Andrew's Cheese Shop. It's closer than my usual haunt, The Cheese Store of Beverly Hills. Andrew's isn't as big, but they had plenty of choices, and they were extremely friendly.

I put together a little foursome. Epoisses on the left (gooey washed-rind fun), a fantastic goat, Monte Enebro, a nice rich nutty Dutch cheese (which tasted halfway between a Gouda and a Parmesan), and on the bottom, Stichelton, a beautiful rich English blue cheese.

Condiments. Marcona almonds, quince paste (the orange jelly stuff), Spanish olives, and acacia honey from Abruzzo.

The carbohydrates. Traditional French baguette, cranberry nut crisps, and olive oil cracker sticks. All from Andrew's, and all excellent.

We also made these puff pastries from scratch. Stuffed with egg, cheese, and spinach. Basically little puff-Spanakopita.

What would all that cheese and bread be without some wine?

On the left a fantastic Burgundy, Parker gives it 92, but I’d give it more like a 94. “The 2003 Clos Vougeot explodes from the glass with licorice, dark cherries, and a myriad of spices. A wine of considerable depth, it is packed with suave black fruits immersed in chocolate. Well-structured, ripe, and exceptionally long, it will merit a higher score if its alcoholic warmth is absorbed into the wine with time (something that sometimes occurs with Pinot Noirs). Projected maturity: 2008-2017.”

On the right, earning 90 points (and again I’d give it more), “The 2006 Fonsalette Cotes du Rhone exhibits meaty, herbal, tapenade, pepper, animal fur, and damp earth-like notes. It is soft, round, lush, and best consumed over the next 10+ years.”

For the main course, we went with Salmon en Papillote, adapted from a recipe by none other than Julia Child. All done from scratch.

Sealed in with the juices are julienned vegetables, parsley, basil, and garlic. We've done this before, but this batch turned out absolutely perfect.

And as the starch, couscous adapted from Houston's (see it HERE). I found a recipe on the web approximating what they do at the restaurant (HERE).

And then salad.

And this delicious, if not entirely emulsified, walnut vinaigrette (from scratch of course).

Then for dessert, our friend Geo’s Chocolate Ganache tart. He very graciously gave us this recipe after some prying, and it’s a terribly excellent and decadent dessert. Mostly it’s butter, sugar, and 70% cacao chocolate. Oh yes!

Then homemade whipped cream. None of those emulsifying agents. And homemade raspberry sauce (raspberries and sugar thrown in the blender).

And fruit to finish.


Fast Food Sushi?

Mar15

Restaurant: Sushi-Don

Location: 970 Monument St #118, Pacific Palisades, CA 90272. (310) 454-6710

Date: March 9, 2011

Cuisine: Japanese / Sushi

Rating: Not bad for a $20 lunch.

 

There seems to be a strange trend going on right now where top sushi bars are opening fast food or light branches. Sushi-Don is in my hood, and it's owned by, or in some way executive-chefed by, Sasabune (see my first and second reviews of that here). Sushi-Don is a kind of fast food version of its big brother Sasabune, with a simplified menu and reduced prices.

The menu is on the wall.

I went for “Combo B” the soup, cut roll, and 5 pieces of standard sushi.

The miso soup is exactly as you’d expect.

You can choose any cut roll; I went for blue crab. Left to right we have Maguro (tuna), albacore x2 with two sauces, salmon, and Hamachi (yellowtail).

The sushi itself is fine. It’s not fantastic, being perhaps 80% as good as that at Sasabune. This is no Sushi-Sushi (REVIEW) either, but then again, the above was $12. It’s certainly not icky mall sushi, and the chef made it in front of me.

I also ordered a scallop hand roll, which was tasty enough. Could have used a touch of yuzu 🙂

Sushi-Don is what it is. You can have a little light sushi meal here for $15, or you could probably get stuffed for $30-35. The equivalent food at Sasabune or Sushi-Sushi or similar would be at least twice as much. Sure, the quality is better there. But at Sushi-Don you can also be in and out in 15-20 minutes. So I think it fills a niche for when one is in a hurry, often alone, and just wants a tasty, quick bite to eat.

Sushi Nozawa has tried a similar concept with SugarFISH, but that for me is even less satisfying, as it isn’t actually much (if any) cheaper than a real sushi bar, and they’ve eliminated the chef. My colleague kevineats.com reviews that HERE.


The Wise Man’s Fear

Mar14

Title: The Wise Man’s Fear

Author: Patrick Rothfuss

Genre: High Fantasy

Length: 380,000 words, 1000 pages

Read: March 4-12, 2011

Summary: A worthy sequel.


The Wise Man's Fear is one of 2011's two most anticipated Fantasy novels, the other being George R. R. Martin's A Dance with Dragons (due in July). WMF, however, can be all yours right now. It's the sequel to The Name of the Wind (which I REVIEW HERE). This is High Fantasy of a rather less epic sort. Not that it's any less fun to read, even weighing in as it does at 1008 hardcover pages. Although, who thinks about pages these days, as I read the Kindle version on my iPad (wouldn't want to mess up that nice hardcover first edition I had signed by Mr. Rothfuss last week!).

Despite the length, it's well worth it. This book is seamless with the first in the series, despite the four-year gap between their publications. I read The Name of the Wind a second time last week, and WMF picks up and continues with exactly the same style and pace. There is still the frame story in the present, but it accounts for no more than 5% of the pages. The action mostly takes place in the past with our hero, Kvothe, continuing on for a bit at University and then venturing out into the wider world. While we sense that some bigger events are in the works, this is still a very personal tale. And it defies all normal storytelling expectations in that it just meanders along. My editor's eye says that whole chunks and side plots could be snipped out without affecting anything. And to a certain extent this is true. But would the novel be better for it? Perhaps it could have lost 50-100 pages in line editing, but I'm not sure I'd take out any of the incidents. As the novel itself says, it's not the winning of the game but the playing of it that matters.

That is very much what The Wise Man’s Fear is about. It’s a story about stories. It’s rich and lyrical, a luxurious tapestry of world and story, without the distraction of the intricate mechanism of plot. The little glimpses into different sub-cultures show a deft eye for details and invention. This feels like a real place, not so much explained, but revealed through the narrator’s eyes.

As Rothfuss said in an interview, Kvothe is older now, and he gets himself into more trouble. There's more sex and violence this time out, although the main romance is still endlessly unrequited 🙂 Kvothe, it seems, is a hero of many talents, and that includes those in the bedroom. Rothfuss doesn't dwell on these details gratuitously; it's not a book filled with battle (or bedroom) scenes.

I'm curious to see how Rothfuss wraps this up in the third book (and I suspect the trilogy might expand). Things still feel early. We find out barely anything new about the main villains. In fact, they don't even show up in this volume. Just like the first book, the end is completely limp and anti-climactic. Kvothe just wraps his story up for the day and we wait (hopefully for slightly less than four years).

But I'll be waiting. Probably for so long that I'll have to read books one and two again. I won't mind.


Dinner and Drinks at Tavern

Mar13

Restaurant: Tavern [1, 2, 3, 4]

Location: 11648 San Vicente Blvd, Los Angeles, CA 90049. (310) 806-6464

Date: March 10, 2011

Cuisine: Market driven Californian

Rating: Good for dinner too!

 

Every couple of months I get together with a group of friends who all have kids the same age for a "Dad's night out." Last time we went to Father's Office; this time we chose Tavern in Brentwood. I'm generally there either for brunch or for an early dinner, so I was pleasantly surprised to see how jammed the bar was.

The cocktail menu. The bar was hopping big time at 8-9pm on a Thursday, mostly 30-something women, with a pack of cougars on the prowl.

“WildRover, Jameson’s Irish Whiskey, Fresh Basil & Tangerine.” This was a hell of a good cocktail. Like a whiskey sour with basil.

We pounded through two bottles of this pleasant CdP, which Parker gives a 93. “The finest tradition cuvee yet made, the 2007 Chateauneuf du Pape (70% Grenache, 20% Syrah, and 10% Mourvedre aged in foudre and concrete tanks) possesses a deep ruby/purple-tinged color as well as a bouquet of black currants, black cherries, garrigue, pepper, and lavender. It is a full-bodied, ripe, exceptionally elegant, pure wine to drink now or cellar for 12-15 years.”

Tonight’s dinner menu. Never exactly the same twice.

“Salmon crudo with cerignola olives, cucumber, and meyer lemon.”

“Endive salad with Schaner farm’s citrus, green olives and fennel.”

“Cauliflower soup with truffle butter and marcona almonds.” This was a bit blander than I had hoped. Maybe it didn't have enough cream, or salt. Still pleasant enough.

“Wild mushroom and leek tart with aged goat cheese and herb salad.”

“The devil's chicken with braised leeks, onions and mustard breadcrumbs.” Captain Picard, owner of L'Idiot, says, “You can't afford the duck, you'll have the chicken!”

“Braised lamb shank with saffron rice, merguez, peppers and pinenuts.” This was a damn good dish. The meat fell off the bone (which could be gnawed Viking-style at leisure). The rice is Persian, and the whole dish had a vaguely Persian thing going on.

“Niman ranch rib-eye with potato-bacon gratin, red wine butter and arugula.”

The desserts du jour.

“Chocolate and coconut coupe, chocolate ice cream, coconut sherbet and graham crackers.” This tasted like its component ingredients, and that wasn’t a bad thing. Rich and refreshing at the same time.

“Snickers Bar, salted peanut caramel and vanilla ice cream.” Very nice dessert. Inside the hard dark chocolate shell was a kind of peanut and caramel mousse.

As you can see, Tavern ain't no slouch at dinnertime either. The dishes are inventive, rich, made with good ingredients, and tasty. You can find some of my brunch reviews HERE, HERE, or HERE.
