Behind enemy lines: 3 months as an iOS developer at Google (splinter.com.au)
200 points by chubs on Jan 3, 2013 | hide | past | favorite | 154 comments



I've had to resolve merge conflicts with pbxproj files and xib files, and while it's not pretty, it's not a good enough reason to eschew Interface Builder. There's a reason that toolset has been around since the NeXTSTEP days: it's powerful. It gets a lot of giggles from people who are less experienced with Cocoa / iOS and think that programmatically defined UI is for 'real' coders. In my experience, the amount of time you save wiring up an app with Interface Builder is significant. With the assistant editor open, you can even drag UI element outlets directly to your code, and it will automatically insert your property declarations (and the same for action methods). The idea that I'm going to grok someone else's UI code at a glance -- the same way I would looking at a visual document -- is absurd. Does Interface Builder fit every situation? Absolutely not. But for the vast majority of apps, you'll get your rapid prototype finished more quickly if you use the tools, and then you can tweak from there.
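
For example, control-dragging a button into the assistant editor produces declarations roughly like these (names invented here; assuming an ARC project):

    @property (weak, nonatomic) IBOutlet UIButton *loginButton;

    - (IBAction)loginButtonTapped:(id)sender;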


I stopped having merge conflicts with pbxproj files - ever - once I started making use of this tool in my projects:

https://github.com/WebKit/webkit/blob/master/Tools/Scripts/s...

It can be run while Xcode is open; it'll just reset the tree view. It basically alphabetizes everything, so there is one canonical ordering to items that you add to the project.
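
For anyone curious, it's a script you point at the project file. Invocation from memory, so double-check against the script itself:

    Tools/Scripts/sort-Xcode-project-file MyApp.xcodeproj/project.pbxproj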


I second this.

When you have a consistent, established framework for managing part of your program, one that gives new developers (or old developers who haven't touched the code in a while) an easy-to-read visual overview and can avoid thousands of lines of meticulous declarative code, you should have a better reason for avoiding it than pompously declaring that it's tricky to maintain computer-generated XML in your version control system.

For version control and XIBs... you should keep them as decomposed as possible (XIB files support soft links to sub-controllers allowing parents and children to be in separate files) and once fully decomposed, you shouldn't need to merge XIBs -- just treat them as monolithic. If you find yourself needing to merge then you've most likely failed to decompose your XIB enough or you've got too many people performing overlapping roles.

If you need programmatic view adjustment, you can implement -layoutSubviews (iOS) or -resizeWithOldSuperviewSize: (Mac) to handle reflow or other code-based tweaks if absolutely required.
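
Something like this, for instance (a minimal sketch; imageView and captionLabel are hypothetical outlets on a UIView subclass):

    - (void)layoutSubviews {
        [super layoutSubviews];
        // Reflow a caption under a variable-height image,
        // on top of whatever the XIB set up.
        CGFloat top = CGRectGetMaxY(self.imageView.frame) + 8.0f;
        self.captionLabel.frame = CGRectMake(0.0f, top,
                                             CGRectGetWidth(self.bounds),
                                             CGRectGetHeight(self.bounds) - top);
    }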

Additionally, since a XIB file is a cached serialization format, every view instantiated from the XIB after the first loads faster than constructing the view in code.


Am I the only one thinking that this contract was probably bound by an NDA, and that making such a post is generally bad practice when leaving employment or a contract (regardless of whether there's an NDA)?

I know nothing ultra-secret is given away, but if I were looking for a contractor, I'd be concerned that such a person might perform a post-mortem on my company after leaving as well.


For those wondering, there's nothing in his post that couldn't be found with some Googling. There's info in here that probably crosses the line of what shouldn't be made public, but at least he didn't go to the extent of using the internal codenames.


Maybe it'd be good for your company to shine in a post-mortem when talking about development practices.

Secrecy is stupid for these things.


I'll second this. I think such a post from a contractor is inappropriate, unless he cleared it with some Google authority, which I doubt he has.


What confidential information did he reveal?


I don't know. But reading his post, it's obvious he discusses a lot of details of Google's internal development methodology, which seems like something you should ask for approval to publish.


You don't know that anything he said is confidential. You don't know that he didn't get approval for this, which you say he "should" do, clearly without knowing anything about Google's policies. I'm also assuming you don't know what his NDA specifies. How can I construe this as anything but you reaching a premature conclusion with too little info?


I did say "... unless he cleared it with some Google authority, which I doubt he has."

So what are you specifically objecting to? My saying that I doubt he cleared it with Google? I'm entitled to my doubts, thank you :-)


It strikes me as being officious to judge a person's actions as "inappropriate" without having the proper context (what their NDA says, what Google considers confidential, etc).


Honestly, I've heard the majority of it before, and I have never been a Google employee. I have, however, listened to some of the talks on their internal workflows and processes that they've presented and posted on YouTube.

I think the only things I hadn't heard before were some of the specific tool names.

edit: http://google-engtools.blogspot.com/ here's their engineering tools blog.

http://www.youtube.com/watch?v=2qv3fcXW1mg and a talk

http://www.youtube.com/watch?v=b52aXZ2yi08 and another talk

Really cool stuff.


At the very least, most of the tools mentioned (gyp, KIF, etc) are used in other open source projects like Chromium and are not Google internal.


"Anyway, yes, there is a bit of anti-iOS sentiment there, you’ll be constantly teased about Obj-C’s strange syntax..."

Ah, yes, and Java and C++ are all flowers and unicorns flying around.


Yeah I was struck by that line too.

After 10 years as a C++ and Java developer, and 3 years of Objective-C, Objective-C is by far my preferred language (although it's an absolute must to use with an auto-completing IDE due to its wordiness).


I think it's mostly that Obj-C's syntax is a bit of an oddity if you're a C/C++/Java/Python/Go developer (I think those are the main flavours at Google) who's never seen it before, but after spending some time with it, the way it's been approached makes a lot of sense.

Plus isn't syntax mocking the same as editor mocking? Expected and pointless.


expected, pointless and not really taken seriously by anyone involved. it's just a fun social convention to identify with a team and mock the other team and its adherents.


I went from Smalltalk to Java/C/C++ in the mid-nineties. Obj-C is both a relief and a constant source of frustration.


I'd expect Smalltalk programmers to find Obj-C syntax much more natural than Java, C#, or C++.


yes, but I get reminded every day how it is not Smalltalk.

e.g. the primitive vs. object distinction, the lack of closures (GCD is a poor replacement) and the way they allow a rethink of control flow, the neat things you can do because the stack is an object (restart exceptions with values! coroutines!)

I'm fine with the compromise, I just wish it was different.

I had hopes that MacRuby would turn into a supported systems programming language, but it's not to be.


It probably stems from the fact that few people learn Objective-C first, whereas most new programmers are started on Java, C, or C++. So Obj-C just looks strange because it's not what they're used to.


I thought it was weird at first too, but after learning Objective-C I can say that it is beautiful.


Umm, nary an iPhone to be found? Most people I know at Google have an iOS product of some sort. Maybe it is different in Google Australia vs Mountain View, but Googlers aren't shy about Apple HW. I would say 50% of Googlers carry MacBooks.


The standard dev setup is a Linux desktop and a Mac laptop (MacBook Pro / Air) or a ThinkPad. So unsurprisingly, you'll see a lot of Mac laptops, but not necessarily iPhones. Also, a MacBook doesn't run iOS. The article author is being more accurate with his terminology than you are, which might lead to the confusion.


I work in Mountain View, and several people on my team have iPhones. The reason is that we have an oncall rotation, and nobody wants to miss a page because their phone kernel panicked or "PagerService has stopped" during their shift. Plus, the iPhone batteries last longer.

I suspect the author's experience might be because he worked in an office where teams are located in closer physical proximity. It can be socially difficult to use an iPhone when the local Android team sits just down the hall.


> phone kernel panicked or "PagerService has stopped"

Do Googlers typically use experimental Android builds?

Googling for "android kernel panic" gives me pages about overclocked custom ROMs (and a Mac OS X-inspired game), and "PagerService has stopped" gives me exactly zero hits.


We do quite a bit of internal dogfooding when coming up to major releases - so yes, it's not uncommon to use an experimental build. You don't have to on a production phone, of course.


The reason is that we have an oncall rotation, and nobody wants to miss a page because their phone kernel panicked or "PagerService has stopped" during their shift.

Are you seriously trying to claim that there is a greater risk with a Nexus 4, for instance, of a pager service stopping or a kernel panic? Your post seems utterly ridiculous.


I have found my Android phone (HTC One X) to be much more likely to have an app crap out and kill the battery by repeatedly polling location services. After removing app after app looking for the culprit, it appears to have been Google Plus. It's recently been updated, so maybe they fixed it.

I never had this issue on my iPhone(s).


Self-respecting googlers aren't using sense or touchwiz junk. They're rocking a Nexus phone or running something like CM.


I have friends with Nexus phones that end up with similar issues.


Meh, I have a Nexus and the wife has an iPhone. They both have had their fair share of problems. I think it's a little crazy to say, in 2013, that the Android line is this unstable phone and that if you're a sysadmin on call you must have an iPhone because it's so super reliable. I'm a sysadmin, I do fine with Android, thanks.


Nowhere did I say the iPhone was perfect or Android was horrible. I was commenting on the battery life. My only phone right now is an Android phone and I enjoy using it. Because of the iPhone's locked-down nature and limit on background processes, it is much less likely to be killed by a rogue process draining the battery. This has been my experience, and that of others I know with other Android phones (including the Nexus).

Regardless of which device you like to use, ignoring its faults doesn't help them get fixed.


See my reply to his sibling post; it's common to dogfood experimental builds, which I assume is what he meant rather than seeing that kind of thing in production builds.


Possibly, but if the choice is between dogfooding experimental builds or using an iPhone, maybe they should not force the dogfooding of experimental builds?

Though I don't think that was what they were saying. Their dismissive tone about Android (incl. the battery) makes me suspect that they really have animus towards the product. Which might seem odd for a Googler, but actually it could easily be explained as someone who was rejected from that team, dislikes someone on that team, etc.


> Which might seem odd for a Googler, but actually it could easily be explained as someone who was rejected from that team, dislikes someone on that team, etc.

Seriously? It’s impossible for a Google employee to not like one of their products?

Pretending to like everything would be dishonest; no need to ascribe it to malice or retaliation.

This should be true at every company. How can you improve your product if you think everything you do is just perfect and there is nothing to be improved?


It's absolutely possible if not probable that many Google employees prefer iOS or outright dislike Android. However the scenarios given were the worst sort of FUD that even Android detractors don't resort to (seriously, so anyone who actually needs to be contacted should avoid Android if we're to believe the claim).


It's not forced for all phones - it's only compulsory on prerelease devices. Presumably nobody would use such a device exclusively if they're on an urgent on-call rotation. I didn't read that much animosity into the post personally...


My understanding is that Googlers get a choice between an Apple computer (primarily MacBook Pro, but I think iMacs are a possibility) or a computer running their customized version of Ubuntu.

You can get a Windows machine but you need to jump through some hoops and paperwork to get it.


FWIW, I do see the occasional iPhone at Google - but not many, of course.

And no, the Nexus 4 wasn't an option for the Xmas present this year. Not totally surprising given the obvious trouble LG's having keeping up with demand.


Been to many a meeting at Google HQ in London and almost everyone I've met there owns an iPhone.


Which building? Ours definitely isn't "almost everyone" but I can imagine the proportion might be different in CSG for example.


Most of my visits were to CSG. Granted the skillset in CSG is more sales & advertising as opposed to tech but still, it was quite funny to see.

Completely unrelated: I'm in love with the carpeting in the London CSG office. I've never felt more compelled to take my shoes off in my life.


I guess it makes a big difference that quite a few of us are actually on the Android team specifically. I know exactly what you mean about the carpets - a bunch of us went over a couple of times for beers, and we thought exactly the same :)


Most people still using a Galaxy Nexus there?


No, mostly Nexus 4's, but they're generally prerelease devices - or I guess a few are bought separately. Still quite a few GNex's around of course.


Last time they got Galaxy Notes in Beijing, I purchased mine from a Googler with small hands.


In the NYC office, yeah, I mostly see the Galaxy Nexus.


Everyone has a task list to work from, and for each task you ‘branch off’ (kindof, they use a customised version of an obscure source control system, but i’ll translate it into Git parlance).

Is Perforce really that obscure?


To be fair, Perforce tends to be adopted most often in the "big company, with really big codebase" use case. That's not typically the norm for iOS developers.


I have friends who are barely aware of git, so I'm sure there are people out there.


I'm not sure why you're being downvoted--the only reason I'd heard of Perforce was from the games industry in certain places.


Ironically, the friend I'm thinking of works in the games industry as well. However, they're on the smaller side, so their code base can be handled by pretty much anything.


It's widely used in video-games development. All metadata is kept on the server (unlike others like svn/cvs). The GUI tools are easy to use (p4win, and p4v for all platforms). The thing just works. I love playing with git/hg/fossil/etc., but when it comes to huge binary files (psd, tga, tiff, fbx), p4 is the king.


Microsoft also uses some kind of Perforce-based client (they bought/licensed a source-code version of it many years ago).


I use p4merge with every VCS I use! git/svn/TFS (don't judge me!) and it's fantastic. Just look at any screenshot: https://www.google.com/search?tbm=isch&q=p4merge


As an Android developer who's required to learn iOS just to port one of my projects, I do find Objective-C's syntax to be strange--quite sloppy, in fact. I tried to list the things I don't like about the language while going through my reference, and here are some of them so far:

- Defining the method signatures in a header file or an @interface section, then repeating the same method signatures just to implement them in @implementation (see the sketch after this list). I just think it's an awful waste of effort.

- The idea of categories. The way I see it, there doesn't seem to be a reason not to subclass since categories won't make sense anyway if you don't include the original class in #import.

- @optional directive in protocols. I think of protocols as Java interfaces, and I assume that's the intended purpose for having them, so having an @optional directive seems quite pointless to me.

- Obscure variable scoping. I find myself having to memorize too many visibility rules--ones that apply to the objects themselves (what attributes they inherited, local/block scoping, etc.) and ones that apply to the files themselves (defining instance variables within @interface, global but in-file variables within @implementation, the extern keyword, etc.). This is done much more simply and elegantly in Java, where you don't need to switch between thinking of your program as a bunch of interacting objects and as a bunch of files.

- Which makes me think that Objective-C just doesn't have good OOP in the first place.

- Conditional compilation. When is this ever useful? I just can't visualize having to write this (maybe for games?), but again, I never had to do this in Java.
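
To make the first point concrete, here's the duplication in question (Widget is a made-up class):

    // Widget.h
    @interface Widget : NSObject
    - (NSString *)describe;
    @end

    // Widget.m
    #import "Widget.h"

    @implementation Widget
    - (NSString *)describe {
        return @"a widget";
    }
    @end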


There is more to OO than Java. :) ObjC has some syntax warts, due to being an extension of C like C++, but it is far better thought out than C++, IMO.

The method signature issue is a direct result of the C history. Far easier (at the time) to tell the compiler in advance about signatures than to force it to compile everything twice to find them (which is, roughly speaking, what Java does, except the second "compilation" happens at run time).

ObjC is a truer OO, IMO, because it focuses more on objects and message passing rather than method calls. As Categories and optional protocols show, this provides a far more flexible approach.

The conditional compilation comment confuses me and makes me think you don't really understand what's happening under the hood. When you compile Java, you get byte code that requires a VM. When you compile ObjC, you get native code. Now imagine dealing with architectures as different as PowerPC and x86. All those differences were conditionally compiled into your VM. In ObjC, you need to deal with them yourself (with all the other positives and negatives that implies).


It's amusing because the opposites of some of these are things that bother me about Java. Having everything in one file? Messy. Not having categories? Sad. @optional is nice if you want to be able to degrade to a default case. Scoping is primarily just C so YMMV.
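
For example, with a delegate-style protocol you can probe for the optional method and fall back to a default (all names here are invented):

    @protocol WidgetDelegate <NSObject>
    - (void)widgetDidFinish:(id)widget;
    @optional
    - (void)widgetDidCancel:(id)widget;
    @end

    // At the call site, degrade gracefully when the optional
    // method isn't implemented:
    if ([self.delegate respondsToSelector:@selector(widgetDidCancel:)]) {
        [self.delegate widgetDidCancel:self];
    } else {
        [self dismiss];  // hypothetical default behaviour
    }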

ObjC is a purer form of OOP than Java is.

Not to mention Java has its own quirks in my book: no unsigned variables, no typedefs, calling a method on a null pointer throws an exception instead of returning null (what's up with that?), etc. And don't get me started on how Android uses XML (poorly).

Really with languages I think it's primarily what you're used to.


I'm mainly an iOS guy but the way Android manages layouts and resources in XML beats the pants off of xibs + interface builder for non-trivial UIs.


You've gotta be kidding. What user interface tool do Android + Eclipse provide? There's hardly anything there; you've gotta hand-roll it in XML. The UI tool is limited and a pain to work with.

I think Android's ability to use folders to support different layouts and resources beats the file naming conventions necessary for supporting the same scenario in iOS... i.e. myLogo~ipad.png


I leave the WYSIWYG to the designer - it quickly falls apart for actually programming complex UIs. My job is to translate PSDs into working UI. There's a good reason we abandoned WYSIWYG for web design years ago, and a modern mobile app is no less dynamic.

I can change the underlying grid size, padding and item spacing for every screen in my entire app in one line. Have fun doing that in iOS.


Try the UI tool for Android on Intellij IDEA, much better.


I've had to use conditional compilation quite a few times, often due to things like target architecture. I noticed my native iOS stuff runs lightning fast, though. So think of it like you are writing souped-up C code. You have to put up with header files and macros, and sometimes conditional compilation for different versions of the app or to run on the simulator architecture vs. a real device, but it is very efficient to build and run.

Categories can be nice to avoid subclassing. Subclassing can become a nightmare when one controller extends another, which extends another, which extends the Google Analytics one. There's so much hidden behavior, and you can't even easily disable the log-spamming Google Analytics one. Not compiling in the init code gives you errors in the log all over.


The idea behind Categories is to add methods to an existing class. This means if you get back an NSString from an API call, for example, you can call a method from a category declared on NSString, without having to convert it to the instance of some subclass. This can be very convenient, and I can't think of any way to do the equivalent in Java.

It's similar in spirit to "monkey patching" in Ruby and similarly dynamic languages (but won't allow you to replace existing methods, so not as dangerous).
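
A trivial sketch of the NSString case (the category and method names are invented for illustration):

    @interface NSString (Trimming)
    - (NSString *)stringByTrimmingWhitespace;
    @end

    @implementation NSString (Trimming)
    - (NSString *)stringByTrimmingWhitespace {
        return [self stringByTrimmingCharactersInSet:
            [NSCharacterSet whitespaceAndNewlineCharacterSet]];
    }
    @end

    // Works on any NSString, including ones handed back by an API:
    NSString *clean = [apiResult stringByTrimmingWhitespace];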


> It's similar in spirit to "monkey patching" in Ruby and similarly dynamic languages (but won't allow you to replace existing methods, so not as dangerous).

My understanding is that you can create methods with the same signatures, but which one will end up getting called is completely undefined, making it a worse idea than it usually is.

It can be a very handy thing though, I really like it.


Objective-C is much more object-oriented than Java, mainly due to the parts of the language you've chosen to ignore because they don't remind you of Java. See the section on "Blub" in http://paulgraham.com/avg.html

Conditional compilation is a terrible, terrible thing but for the opposite reason you think; it's just too useful for too many things and almost inevitably leads to shipping code that's never/barely been tested. Java left it out to protect WORA, but the only reason you can really (sort of) live without it in performance-critical apps is due to the JIT doing the same sort of things for you invisibly.


Conditional compilation is pretty widely used in C, so I guess Obj-C inherits it from there. The most obvious case is when your code is running on different architectures - consider #ifdef OSX / #ifdef iOS or similar. In Java you're theoretically writing for only one architecture, so I guess it'd be less useful.
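
In Cocoa the real macros come from TargetConditionals.h rather than literal #ifdef OSX / #ifdef iOS; a sketch from memory, worth checking against the header:

    #include <TargetConditionals.h>

    #if TARGET_OS_IPHONE
    // iOS-specific code path
    #elif TARGET_OS_MAC
    // OS X-specific code path
    #endif

    #if TARGET_IPHONE_SIMULATOR
    // e.g. stub out the camera, which the simulator lacks
    #endif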

I don't know Obj-C well at all, but no arguments on the rest :)


I'm coming around to the point of view that storyboards cause more problems than they solve, but throwing out xibs entirely seems a little extreme. Are they really doing all their layout in code?


XIBs and programmatic layout are the Vim vs Emacs of the iOS world.

There are a lot of pros and cons of both. I used to spend a not insignificant amount of time trying to moderate debates between iOS engineers who favoured one approach versus the other.

Doing all your layout in code isn't inherently bad, and there are a lot of Apple-written apps that do this (conversely, some of the newer iOS built-in apps do use NIBs). The main problem that I've found with layout entirely in code is that whilst it's fine for you, the sole developer, once you bring more people onto the project you can have problems getting them up to speed on what exactly is going on where.

Of course, the solution to this is to enforce strict coding standards over how to lay out the views themselves in code, which Google clearly do. And as the article points out, resolving merge conflicts in code is somewhat more enjoyable than in nibs.

That said, just as programmatic layout isn't inherently bad, neither is leveraging Interface Builder strategically. Here's a good example: iPhoto on iPad has a completely custom interface that's mainly laid out programmatically, but certain key elements are actually composited together in IB. For example, the brushes that slide up when touching up photos are brought in from NIBs, but animated and manipulated in code. Using the nib file to load in the images reduces the code without sacrificing understanding (or at least, that's Apple's argument; there's a fantastic WWDC 2012 session that covers how the iPhoto UI is put together in more detail).

The TL;DR: the only risk of programmatic layout that I see is developers going 'off piste' and laying out in a non-standard way. With the right coding standards you should be fine.


XIBs and programmatic layout are the Vim vs Emacs of the iOS world.

To me that implies that one essentially makes the other redundant but I prefer to see them as two powerful tools in my box that each have their uses. I've always thought the people that insist on doing everything in xibs were a little weird but going too far the other way doesn't make sense to me either.


You're right, it's a pretty bad analogy! I was more going for the whole NIB/Code debate being something that's never going to be adequately resolved.


> There's a fantastic WWDC 2012 session that covers how the iPhoto UI is put together in more detail.

Do you happen to know the session name or number?


Sure, sorry - I should have put that in my original post. It's Session 243, iPhoto UI Progression and Design, with Randy Ubillos himself.


Probably https://developer.apple.com/videos/wwdc/2012/?id=243 I haven't watched it yet, but it's been on my to-watch list for a while now.


The session name is 'iPhoto for iOS: UI Progression and Animation Design'. It's under Essentials.


Some of Apple's apps do their layout in code because the necessary features weren't available in Interface Builder yet (think of things like collection views, etc).


Using code for layout has several advantages:

- easier version control and code merging

- you don't need Interface Builder to review the layout code

- code is easier to search, e.g. for the use of certain controls (Xcode doesn't support searching XIB files)

- code can be parameterized (you can e.g. use global constants for font sizes, colors and margins; see the sketch after this list)

- layouts that follow certain rules (fixed heights and margins, etc.) are sometimes easier to build with code, especially when the number of visible controls is variable or some controls are only optionally displayed

- you can easily refactor aspects of the layout into reusable components (i.e. functions or classes)
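
A tiny sketch of the parameterization and variable-count points (the constants and the rowViews property are invented):

    static const CGFloat kMargin = 12.0f;
    static const CGFloat kRowHeight = 44.0f;

    - (void)layoutSubviews {
        [super layoutSubviews];
        // However many rows there happen to be, the same
        // constants drive the whole layout.
        CGFloat width = CGRectGetWidth(self.bounds) - 2.0f * kMargin;
        CGFloat y = kMargin;
        for (UIView *row in self.rowViews) {
            row.frame = CGRectMake(kMargin, y, width, kRowHeight);
            y += kRowHeight + kMargin;
        }
    }

Change kMargin once and every layout that uses it follows.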


I'm a big fan of declarative layouts and vastly prefer doing Android UIs to iOS UIs for this reason. But that's only because Android has a sane & powerful layout language. Obj-C is so hideously inflexible and verbose for this kind of thing that I prefer to stick to xibs despite their limitations.

Of course there will always be cases in iOS apps where layout in code makes more sense though.


AutoLayout (new in iOS 6 and OS X 10.7) is decidedly OK to use in code or in IB.


I'm of the "do both" camp. Personally, I like starting out the interface in a .xib because of the WYSIWYG style - I've been an Illustrator and Photoshop design guy for much longer than I've worn my iOS dev hat. By laying out areas of apps in subviews of the main view, then manipulating those in code, I get very easy content sizing (3.5" vs 4" screen) without using springs/struts or the terrible iOS 6 autosizing constraints, and animations couldn't be easier.

Just looking through my open projects right now, all of my view controllers have a buildUI method right under viewDidLoad.


Glad to hear I'm not the only one that thinks that the new iOS 6 auto layout stuff is a train wreck. The WWDC sessions on the subject are like Onion parodies of normal tech talks.


I'm starting to prefer using code for layout, since it does make some things easier. When I inherit a large project, it's nice to be able to search & easily change stuff in code rather than having to tweak lots of xibs.


I essentially did this for my senior capstone at university, and it worked pretty well for my purposes. Granted, I was working mainly in data visualizations. I'd propose that doing so is actually not too bad of a thing if you're a) not rapidly prototyping, and b) need to manipulate the large majority of the UI in code anyways.

I haven't made many more iOS apps after that, so I can't really say that with lots of experience, but still. As long as you're mindful of how you structure your code, it's not extremely difficult to maintain.


There was a nice article posted to Reddit recently about why someone doesn't use nibs; the discussion in the comments may shed some light on your question.

http://www.reddit.com/r/programming/comments/15jjfi/why_i_do...


An engineer at my company who's relatively new to iOS does all his layout in code. Way more flexible.


Google are very strict about things like code style guides, so things like having incorrect spacing, or ivars that aren’t in alphabetical order, or lines wider than 80 chars, will all get picked up.

Is the part about instance variables having to be sorted alphabetically really true? I did a quick search but only found the C++ style guide, which says nothing like that.

It sounds absurd; instance variables should (imo) be grouped logically, not sorted by their names, which are not very relevant when it comes to which belong together.


Did you read the part about how it helps dealloc? I can see how that would be nice; if you don't have too many instance variables, it wouldn't be a big deal.


No, I missed that part ... I guess it makes some kind of sense, but it seems weird to assume that all instance variables need deallocation.

If I have a buffer, and a length, I'd like to keep them together in the code since they both are part of the same thing. The length, however, is likely some integer type that won't need mentioning when destroying the instance.


But if you are forced to order alphabetically, you might start naming related instance variables in a related way (which wouldn't necessarily be bad). For example, in your case you might have myFileBuffer and myFileBufferLength, which would hopefully be next to each other, no?


It sounds absurd, instance variables should (imo) be grouped logically, not sorted by their names which are not very relevant when it comes to which belong together.

Not to mention packing, which, if you are compiling with enough warnings turned on, can get annoying. I like to use http://google-styleguide.googlecode.com/svn/trunk/cpplint/cp... on my code, but many of the warnings are more style preferences than anything (like where braces go). Still, I agree with most of their style guidelines, and it's nice having checks for things like unnecessary includes, missing includes and missing idempotency preprocessor guards.

Edit: BTW, yes, I'm talking primarily about C++; haven't done enough ObjC to know about packing there, so YMMV. As for dealloc arguments, again, I don't know how it's done in ObjC, but in C++ I think it's irresponsible these days to not be using something like shared_ptr<> if you are dealing with heap, thereby eliding the need to even handle deallocs.


Alphabetizing, or any canonical ordering, can help minimize merge conflicts. We require it for Java imports and lists all over, so I'm not surprised to see it here.

In some projects I've worked on, we allow logical groupings, provided comments that describe what they are, but then require alphabetizing within the groups.


Can't say whether it's true, but the purported reason was stated in the article (same sorting used for declaration and for deallocation, to make reviewing for memory leaks easier).


That's correct: here's the passage in question from the Google Objective-C style guide:

> dealloc should process instance variables in the same order the @interface declares them, so it is easier for a reviewer to verify.

One engineer's idea of 'logical order' may not be another's, so alphabetical order seems as good a sorting method as any. And since the article itself doesn't link to the style guide, here it is:

http://google-styleguide.googlecode.com/svn/trunk/objcguide....

Google keep it regularly updated, so it includes newer developments like ARC and the modern literal syntax.
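
In pre-ARC code (the article notes the project didn't use ARC), that rule looks something like this; class and ivar names are invented, but both lists are in the same order:

    @interface GTMThing : NSObject {
     @private
      NSData *bannerData_;
      NSString *title_;
    }
    @end

    - (void)dealloc {
      [bannerData_ release];
      [title_ release];
      [super dealloc];
    }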


The style guide (in that link at least) doesn't actually enforce or even mention alphabetical order for ivars. It seems an unusually prescriptive restriction for what's quite a reasonable style guide. The stated restriction - dealloc in declaration order - makes a lot of sense.

(Also, line lengths merely have to 'try' to be less than 80 characters, so there's wiggle room here too.)


Note that the public style guides aren't always a 1:1 match with the internal ones, usually for legacy reasons or to be more consistent with other internal languages.

I don't see anything offhand in the internal ObjC guide about alphabetical order for ivars, so it's possible the OP misunderstood the "dealloc in declaration order" rule. It's also possible his specific project had a convention of sorting them alphabetically.


Also different teams have slight variations from the official style guide.


'Logical' may indeed be an insufficient description given that it lacks any inherent substance (logic can demonstrate something which is true or false).

Personally I generally organize by kind: model (including state), and then interface (views, etc), and then controllers (including helpers).


> Best case, your code is approved, you can then merge master into your branch and push your branch up to master.

Isn't this backwards? The code should be reviewed after I merge master into my branch; the merge can introduce any number of random issues and conflicts that may substantially change the code being submitted.

Besides, I think the 80-char limit (especially in ObjC) is ridiculous, probably just put there by terminal diehards who make life harder for everyone else.


An 80 character limit is great if you like looking at multiple source files side-by-side.


I do that a lot and I found long lines annoying on an 11" MacBook Air, which has a 1366px horizontal resolution, and I use a fairly large font size at 12pt. I want to believe most developers use a larger display for their day to day work.

(bonch, you’ve been hit by the hellbanning BS: your comments are invisible)


The limit is flexible at the project level, many projects use a 100-char limit. And it's only for code, not for xml/json/string etc. files. And while it adds some extra line breaks to your code, it does improve readability a lot, particularly in visual diff views.


I imagine that with a code base the size of Google's, code is constantly being added, so you have no hope of merging with the master branch and getting the code reviewed before someone else changes it.


Not really - there are thousands of engineers and bazillions of files in the code base, but any single file you're working on is still unlikely to have hundreds of other engineers frantically changing it at the same time as you. The last company I worked at had 8 other engineers (at the most) and if anything I probably saw more merge conflicts there.


Even if multiple people change the same file at the same time, it does not necessarily require a manual merge, except when the changes conflict with each other (e.g. adding a line at the same position in the base and head branches), so the automatic merge does its job most of the time.


Definitely - I'm guessing you would need to submit the merge for review as well? Maybe someone from Google can clear it up. The merge would be pretty likely to break things like the alphabetical ordering, which seems to be a terrible no-no at Google. ;)


Normally those minor merge changes do not require another round of review, but it's up to the original author of the changelist to ask for it, in case the difference is substantial. Engineers are trusted to use their best judgement in such situations.


In similar systems I've used, there are often some revisions bounced back and forth: chatting about the line comments on the code, implementing the requested changes, and finishing the approval. You wouldn't want to merge master 5 times per change.


"Which explains things like android’s less-than-beautiful UI". They have improved their UI a lot in version 4 and above. I really like new artifacts they introduced such as ActionBars. You can see them implemented in GMail app and you can't label it "less-than-beautiful".


I think they've been going the wrong way lately, personally. I have trouble figuring out all the unlabeled icons myself whenever I use an app I don't use every day. It took me a while to even figure out what happened to the Foursquare check-in button when it became a tiny unlabeled icon. I know I can tap and hold, but I've never seen anyone else do that, never seen a user do that in a user study, and it's a pain in the ass doing it for a ton of glyphs in an interface. I'm more likely to just assume functionality doesn't exist than go tap and hold on half a dozen icons.

Users of apps I launch with the new style guidelines have a horrendous time due to not noticing that the main action is often an unlabeled icon with no chrome in the top-right action bar, etc. Often they flounder around in the app, doing what they can by tapping on content directly, and never even try or notice the action bar icons.

Do flat, chromeless, unlabeled icons look good? Yes, but they are about as unusable as possible. The worst thing is that we are forced to follow Google's guidelines if we want to get featured, and their design guide is a piece of crap not backed up by user studies. So we developers end up implementing all these stupid workarounds, like a tutorial overlay the first time any screen is shown, with a big freaking white arrow and some text pointing to the otherwise unnoticeable corner icons...


Android looks so much better than iOS in 4.0+ (on Nexus devices at least) that I'm surprised to hear people still say this. It's at least close (though to me it's no competition as to which OS is more aesthetically pleasing).

That's not to say a lot of 3rd party apps on Android aren't horrendous looking, because that is true.


They have improved a lot under Matias, who clearly has more than just a very good design sense (he also has the ability to get it done): https://plus.google.com/114892667463719782631/posts


Perhaps it is because he spent more time on the source control process that I feel this portion of the piece was really lacking. I would have liked to have heard a little bit more about Google's design process.


Luckily, perforce has a fantastic graphical merge tool, which i’ve now adapted into my post-Google Git tooling.

Anyone know what he is referring to?

Edit: anyone know what Git tool he is referring to?



I much prefer Meld (http://meldmerge.org/) when I want a graphical tool, but usually just edit the conflicts in vim.


Out of interest, do you have a good way to compare two parts of the same file using vimdiff without creating two files? Say a huge xml file with two nearly identical <parts></parts>?


Yank each part into its own new split buffer (:vnew, or :vsplit plus :enew), paste, and run :diffthis in each window.


I've actually never used a visual diff tool. Usually I just search the file for ">>>>>>>>>".

Maybe I should check it out.


As someone who moved from Perforce to VS/TFS 2010 at work, the first thing everyone at the office demanded back was p4merge. It's a very good tool.

Fortunately, with VS2012 the built-in merge tool has become much better and allows you to do inline editing in the basic compare/merge view as well. Since VS2012, I can't say I don't miss p4merge, but I miss it a lot less.


I've been happy with Beyond Compare. Reasonably priced, does 3-way merges, configurable with many version control systems. Now with Linux & Windows versions (but not an OS X version yet).


If you are into this stuff, Araxis Merge is also worthy of consideration. It does a better job of char-by-char diffs than p4merge, and will recalculate the diffs after you manually edit the result.

The (old-ish) version I used had only 3 panes, though, making p4merge superior for 3-way merges. p4merge was easier to use from the keyboard, too. But for ordinary merges, I much preferred Araxis.


I feel like only someone who's never used other graphical merge tools would call Perforce's merge tool "fantastic". It works nearly the same as every other graphical merge tool I've used. Hell, it works almost exactly the same as FileMerge, the hidden utility app that is installed when you install Xcode.


Sounds like p4merge.


Interesting to look at the product that he was working on: http://www.google.com/enterprise/mapsearth/products/coordina...

Based on the screenshots, it looks like something developed in-house for Google security guards. I wonder if Google can actually put the right kind of expertise and resources into selling these sorts of specific IT solutions to non-technical organizations.


> Same with merge conflicts on .pbxproj files - these files weren’t checked into source control. Instead, there’s a Google open-source tool called GYP that generates your Xcode project from a JSON recipe and recursively searching the folders for source files.

Wow. That sounds crazy. I feel there's an opportunity here to make a merge tool that knows how to handle .pbxproj files. The most common scenario I've encountered is when multiple people add files to the project - they get appended to a section of the .pbxproj file and consequently result in an (easily solvable) conflict.

Part of me wonders why nobody at Apple has just gotten fed up with merge conflicts and solved the problem already.


Diffs for pbxproj files are also not especially readable, so just a merge tool wouldn't be enough to make collaboration easy.

Here's a recent diff of adding some files: http://trac.webkit.org/changeset/138661/trunk/Source/WebCore...

and the equivalent change in Gyp:

http://trac.webkit.org/changeset/138661/trunk/Source/WebCore...
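
For anyone who hasn't seen one, a .gyp recipe is a Python-style dictionary (close to JSON). Roughly like this, with the target and file names invented:

    {
      'targets': [
        {
          'target_name': 'MyApp',
          'type': 'executable',
          'mac_bundle': 1,
          'sources': [
            'Source/AppDelegate.h',
            'Source/AppDelegate.m',
          ],
        },
      ],
    }

Adding a file is a one-line change there, versus the multi-section pbxproj diff above.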


Google has a similar situation for other languages, but rather than being for merge issues, it is more a way to deal with the huge reams of libraries and software components available. GYP may do something similar. E.g. all code is in one codebase, but pulled out into a specific project by tooling, basically.


>> However at Google it was noticeable that designs aren’t really taken seriously. Which explains things like android’s less-than-beautiful UI, and google’s generally noticeable lack of focus on design.

This is something I have a hard time wrapping my head around. I am not a designer, and wouldn't even consider myself particularly good at it. However, there is no shortage of evidence to support the value and importance of good design. Therefore, I don't understand why any company or team culture would marginalize it.

Can anyone comment on the extent of this culture at Google or elsewhere? Specifically, why it exists and how it's propagated.


1. Many engineers are unable to recognize the merits of one design over another. But they have a finely tuned sense of engineering effort required to implement certain design features, so they literally see the cost of UX proposals without seeing the benefit. So they argue.

2. Many PMs believe that all design is subjective, and therefore it's perfectly reasonable to substitute their judgement for that of the UX'er. So they argue.

3. Google is a uniquely bottom-up culture, where consensus is required to move forward.

These three things lead to design by committee, and it is extremely painful.


Thank you. This was a very helpful response, and I appreciate you sharing it.


"Man is a creature who can get used to anything, and I believe that is the very best way of defining him."


"But, more likely, your code has some mistakes or style violations and needs to be fixed up. So the reviewer puts comments against the relevant lines in a web-based review system, ..."

Is it Rietveld? http://code.google.com/p/rietveld/source/browse/


Rietveld is a degoogled fork of it.


"Same with merge conflicts on .pbxproj files - these files weren’t checked into source control."

I use SourceGear DiffMerge as my default Git mergetool. Makes it pretty straightforward to merge .pbxproj files.


Good to know. I've merged both xib and pbxproj files and while it's possible it's not pretty.


Regardless of confidentiality issues, the post was definitely in poor taste. For example, he talks about "android’s less-than-beautiful UI". If I ever hired a contractor, and he posted something to that effect about one of my products, you can bet I'd never hire him again. It's extremely unprofessional, and a breach of trust. It's one thing talking about this stuff between friends, but another to badmouth your (former) employer to the public internet.


Haha, yeah, I worked with some ex-Googlers and experienced a similar review system. I often had multiple unrelated sections to work on while something was being approved, not to mention multiple iterations of the branch stuck in the approval bottleneck too. I guess other companies like Pivotal Labs get around this by pair programming, which I never thought of as an alternative to code review until now. Hmm.


Very interesting read! Thank you for posting!


I'm surprised Google chose to not use ARC for that project. Could anyone comment on why Google would use ARC for one thing and not another?


I wish I could work for Google, but hearing their interview horror stories made me really afraid.


What's it gonna cost you to apply? Maybe an hour to prepare the CV + cover letter. Take it from there, and if you really want the job, solve all the problems in Cracking the Coding Interview.


Interesting article. I was surprised the author ended it with "I only regret that i had to terminate my contract early due to out-of-my-control family reasons." Doesn't seem to me to be a great thing to have on your website for self-promotion.


What does having an unexpected family problem have to do with being a problem for future clients? Everyone has had bad things happen; it doesn't mean anything. If my mom died during my last project, I don't think that would harm my chances of getting the next one.


As I have said in another comment: I originally misread the sentence as "out-of-control family reasons", missing the "my" - and the sentence is not as bad as I thought (or to put it another way, not necessarily bad at all).

Unfortunately I can't edit or delete the original comment.


Maybe his website is not about self promotion?


"If you're looking for a good iOS developer, drop me a line!"

Seems like he's at least partly using it for some self-promotion.


Without it there would be lots of comments that questioned the reason for early termination.


"Had to leave for personal reasons" would sound a lot better.


>Doesn't seem to me to be a great thing to have on your website for self promotion.

Yes, he should have lied about the fact, as a real professional would.


To be honest, I actually originally misread the sentence as "out-of-control family reasons", missing the "my" - and the sentence is not as bad as I thought (or to put it another way, not necessarily bad at all).

However, there is a huge difference between "lying" and choosing good phrasing (or even omitting unnecessary information), and your comment strikes me as very naive.


I'd rather hire the guy NOT omitting "unnecessary information".

And I would understand it too if a guy had to quit working for me due to "out-of-control family reasons". Let's be honest: everybody would do it if they had to. Imagine your child being heavily sick or something. Would you continue working on some project when you are needed thousands of miles (or many hours) away?

As for the people that would mind those kinds of things, I wouldn't want them hiring me either. For one, they could have made my life hell afterwards with lawyers and claims...


Posting just to bookmark it.



