
Memory Management in Swift: A lost opportunity for Apple

Swift was released to the public at WWDC 2014, to general surprise. The language has since revolutionized the iOS/Mac OS X programming scene, thanks to its modern style and its numerous improvements over its predecessor, Objective-C. However, one of the big disappointments for me was Apple’s decision to stick with ARC (Automatic Reference Counting) for memory management in Swift’s runtime.

I understand why Apple has done this: they want Swift and Objective-C code to be binary-compatible, they avoid having to design and build a new memory management system, and they reduce the adoption hassle as much as possible. Still, I was expecting a brand new, modern solution for memory management in Cocoa, one that would not require extra work from the developer and would eliminate one of the main drawbacks Objective-C has been carrying since pre-ARC times.

Manual Retain Release

A long long time ago (well, not that much, actually ;), before ARC, Cocoa’s memory management was done by means of Manual Retain Release (MRR). In MRR, the developer declared which objects had to be kept in memory by claiming ownership of every object created, and relinquishing that ownership when the object was no longer needed. MRR implemented this ownership scheme with a reference counting system: each object carried a counter indicating how many times it had been claimed, which was increased by one with every claim of ownership and decreased by one every time the object was released. The object ceased to exist when its reference count reached zero.

The special method alloc was used to create an object and claim ownership of it, while retain was used to claim ownership of an already existing object. Conversely, release was used to relinquish ownership of the object. There were other methods too, like copy, which created a copy of an existing object (claiming ownership of the newly created one), and autorelease, a deferred release that relinquished ownership at a later point, when the enclosing autorelease pool was drained. Your typical class instantiation would look something like:
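
(The snippet below is an illustrative sketch: the Person class, its setName: method and the people array are made up for the example, but the alloc/retain/release sequence is the part that matters.)

[CODE]

// MRR in Objective-C: ownership is claimed and relinquished by hand.
Person *person = [[Person alloc] init];   // alloc: we create the object and own it (retain count 1)
[person setName:@"John"];
[self.people addObject:person];           // the array claims its own ownership (it retains the object)
[person release];                         // we relinquish our ownership
// Forgetting the release leaks the object; releasing it one time too many crashes the app.

[/CODE]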

This required a lot of extra work from the developer, especially when dealing with complex classes and interactions between objects, and it was the main cause of memory leaks and memory-related problems in the code. In my opinion, one of the main reasons Objective-C has always been seen as a difficult or unfriendly programming language is its memory management system.

Enter Automatic Reference Counting

Automatic Reference Counting (ARC) was introduced with Xcode 4.2, and it was welcomed as wonderful news by the developer community. You no longer needed to remember to retain and release your objects; ARC would take care of that. You just needed to declare a property as “strong” (meaning that the object would be retained and its reference count increased) or “weak” (meaning that the reference would not retain the object or extend its lifecycle). The problem? ARC is prone to retain cycles, and it has never been very good at taking care of memory cleanup.

A retain cycle is caused by two objects holding strong references to each other. Think of a Book object that contains a collection of Page objects, where every Page object has a property pointing back to the book that contains it. When you release the variables that point to the Book and its Pages, the objects still hold strong references to each other, so they are never released and their memory is never freed, even though no variables point to them anymore.
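
In Swift terms, the situation looks something like the following sketch (the Book and Page classes are reduced to the two references that matter here):

[CODE]

class Book {
    var pages: [Page] = []          // the book holds strong references to its pages
    deinit { print("Book deallocated") }
}

class Page {
    var book: Book?                 // strong reference back to the owning book
    deinit { print("Page deallocated") }
}

var book: Book? = Book()
var page: Page? = Page()
page?.book = book
book?.pages.append(page!)

book = nil
page = nil   // neither deinit ever runs: the Book and the Page still retain each other

[/CODE]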

In order to avoid this circumstance, the developer must be aware of potential retain cycles in the code and debug the app (using Instruments, for example) in search of leaks and retain cycles. In Objective-C, this implies being careful about whether to declare each object property as strong or weak.
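
The usual fix for the example above is to make the back reference weak, so that it no longer keeps the book alive; something along these lines:

[CODE]

class Page {
    weak var book: Book?            // weak: does not retain the book or extend its lifetime
    deinit { print("Page deallocated") }
}

// With this change, releasing the last external reference to the Book deallocates it,
// the page's book property automatically becomes nil, and the Page itself is
// deallocated as soon as nothing else references it.

[/CODE]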

Another problem with ARC is its indeterminacy: you have no control over when a variable is going to be released and its memory freed. To help the developer handle the memory allocated for objects, ARC defines blocks of code called “autorelease pools”. An autorelease pool is a scope in which objects allocated under ARC are collected; when execution reaches the end of the autorelease pool, all of those objects are (hopefully) released. The problem is that ARC usually defines one main autorelease pool in the project, basically enclosing the main function. That means that, unless you take care of memory management and define other autorelease pools, the memory of your program can grow without control, potentially reaching a very high memory footprint on the system. You can assign an object to nil, thus indicating to ARC that you are releasing that object because you no longer need it, but you cannot really tell when (and if) ARC decides to actually free that object.
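
For reference, this is roughly what an explicit autorelease pool looks like in Swift on Apple platforms (the loop and the file paths are made up; the point is that temporary objects created inside the block are released when the block ends, rather than piling up until the enclosing pool is drained):

[CODE]

import Foundation

for index in 0..<10_000 {
    autoreleasepool {
        // Any autoreleased Foundation objects created here are released
        // at the end of this block, keeping the peak memory usage low.
        let data = NSData(contentsOfFile: "/tmp/image-\(index).jpg")
        _ = data
    }
}

[/CODE]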

Apple has some guidelines regarding the use of autorelease pools to reduce the memory impact of loops that create and use many objects, but in the end, when programming in Cocoa/Objective-C, I have the feeling that I don’t really have control over the memory my program is using.

What other languages do

When I was a heavy C/C++ programmer, one of my main complaints about the language was memory management. Much has been written about malloc and free, and their misuse is actually behind the vast majority of security bugs, vulnerabilities and nasty exploits found in software today. In C/C++, you are responsible for allocating and freeing the memory of your objects: you must make sure to allocate exactly the amount of memory you need, avoid deallocating an object that is going to be used later (causing a use-after-free vulnerability), and avoid freeing memory that has already been freed (causing a double-free vulnerability). However, in C/C++ you have full control over when and how your objects get deallocated, which allows you to fine-tune the memory usage of your application.
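
For what it is worth, Swift itself exposes this manual style through its unsafe pointer API, which mirrors the malloc/free discipline; a minimal sketch (not something you would normally write in application code) looks like this:

[CODE]

import Foundation

// Manual allocation: we decide exactly how much memory we need...
let buffer = UnsafeMutablePointer<Int>.allocate(capacity: 10)
buffer.initialize(repeating: 0, count: 10)
buffer[0] = 42
print(buffer[0])

// ...and exactly when it goes back to the system.
buffer.deinitialize(count: 10)
buffer.deallocate()

// Touching buffer after this point would be a use-after-free;
// calling deallocate() a second time would be a double free.

[/CODE]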

Other languages like Java, Python, or Ruby implement a mechanism called garbage collection to handle memory management. Depending on the language, the garbage collector may use reference counting similar to ARC’s plus an additional cycle detector (as CPython does), or a tracing collector that finds unreachable objects (as the JVM does); either way, it allows the developer to forget about memory management and just focus on programming the application. In the end, isn’t that what a programming language is supposed to do?

Why Swift should drop ARC

In Swift, the problem with ARC is even worse, because Swift is strongly typed and type safe: every variable must have a known type and a non-nil value (unless it is declared optional). That forces the developer to add some nasty (in my opinion) hacks that have nothing to do with the actual application code. Let’s take a look at the last use case of the memory management chapter of the book The Swift Programming Language. That chapter considers several scenarios where crossed strong references cause a retain cycle, and the last one deals with two entities that reference each other and where neither property can be nil: a Country object that must have a capital City, and a City object that must belong to a Country. Both must have valid values for their properties (capitalCity for a Country, country for a City), so City defines country as an unowned reference, and Country defines capitalCity as an implicitly unwrapped optional (City!).
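
For reference, the two classes in that example are defined roughly like this (reproduced from the book’s pattern; the exact code in the current edition may differ slightly):

[CODE]

class Country {
    let name: String
    var capitalCity: City!                  // implicitly unwrapped optional
    init(name: String, capitalName: String) {
        self.name = name
        self.capitalCity = City(name: capitalName, country: self)
    }
}

class City {
    let name: String
    unowned let country: Country            // unowned back reference
    init(name: String, country: Country) {
        self.name = name
        self.country = country
    }
}

[/CODE]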

Now, this definition of the classes seems quite forced and unnatural to me, and the concept of an optional variable (by definition, a variable that can contain a nil value) that is nonetheless guaranteed to be non-nil is really hard for me to grasp as a logical, useful entity. It is unfortunate, because there are many situations in which a relationship between classes can result in a retain cycle, so I will have to get used to this scheme, but I find it really hard to understand and to justify.

I would find it easier to stick to manual memory management in the C/C++ style. Sure, it is harder, but at least the allocation-release sequence makes sense from a logical point of view. Other languages like Java have included a modern garbage collection mechanism with cycle detection algorithms for years, and I wonder why Apple has decided to lose such an opportunity to fix a problem they have been carrying for years now.

 

 

24 Comments
  • Dhayanithi

    August 12, 2014 at 8:35 am

    Catchy point about ARC in the Swift language. Although this problem exists in ARC, I hope the GC technique doesn’t engulf ARC, because I believe ARC is more efficient and faster than GC. As I understood it, the GC process happens in the background at fixed time intervals or during memory shortage. Sometimes the application performs worse when the GC process is running in the background.

    • Ignacio Nieto Carvajal

      August 12, 2014 at 6:40 pm

      Yes, you are right, but don’t forget that ARC memory cleaning happens too “in background”, and in an indeterministic way, i.e., you cannot predict or guess when it’s going to happen, and I have no proof that ARC is more efficient or faster than GC. What I do know for sure is that GC memory management in languages such as Java is easier from a developer perspective than its ARC counterpart.

      • Dhayanithi

        August 13, 2014 at 5:04 am

        Yes your are absolutely correct, in developer perspective GC is better. We have option to force the GC process whenever we require via code and some configuration settings avail to perform GC in a effective manner. But Im having different thought with respect to ARC. As far as I understood, ARC memory cleaning not happens “in BACKGROUND” and in deterministic way. It reclaims memory whenever object reaches its reference count to 0. The disadvantage in ARC is retain cycles problem as you explained(in awesome way). I experienced the both way of memory recycling in objective C and in .Net. So, I believe ARC would be better at-least for mobile applications.
        Please refer : http://www.elementswiki.com/en/Automatic_Reference_Counting_vs._Garbage_Collection,
        http://lists.apple.com/archives/objc-language/2011/Jun/msg00013.html.

        I am always willing to reconsider my thoughts if my belief turns out to be wrong.

        • Ignacio Nieto Carvajal

          August 13, 2014 at 6:58 am

          You are right, in theory ARC tells us that objects get deallocated as soon as their reference count reaches zero, but I have my doubts, as my tests seem to indicate that the deallocation actually happens some time later, usually together with other objects’ deallocations. Of course your point is still valid, as GC would involve a more “aggressive” memory cleaning that would eventually have a noticeable impact on the user’s main thread.

          By background I meant that, although ARC “injects” the deallocation code into the App’s executable (as opposed to GC working as an independent process), this deallocation is finally done by a separate thread working “in the background”, perhaps with the object’s own deallocation code being executed in a background thread (while the program’s main interaction keeps running in the main thread), so I am not sure whether this is an advantage over GC or implies a more efficient process (a separate GC process/thread versus the injected cleaning code of ARC).

          Thanks for the links, they definitely shed some more light on this fascinating topic!

          • Dhayanithi

            August 13, 2014 at 9:08 am

            I believe that the retain/release injection in ARC happens at compile time.

          • Ignacio Nieto Carvajal

            August 13, 2014 at 9:12 am

            Sure, it is injected at compile time, but it gets executed at runtime. If you have an object “Person” that gets shown in, say, a UITableViewController, and you pop that UITableViewController from the navigation stack (go back or segue to another VC), the Person objects shown in the UITableView should get deallocated by that code inserted by ARC, but doesn’t this code actually run in the “background” (I mean, not in the UX/UI main thread), while the main thread shows you the new/previous VC? At least, I think so.

  • Dhayanithi

    August 13, 2014 at 10:03 am

    I believe that the retain/release code would also get executed along with our normal code. Earlier, developers followed MRR (Manual Retain Release), and at that time the reference count retains and releases happened in the main thread itself. Now MRR is replaced by ARC (the compiler itself injects retain/release). The only difference between MRR and ARC is the injection of the retain/release code; the rest of the process is the same. After the build process completes, the binary file itself contains the release/retain code, which executes along with the normal code. As far as I know, ARC works in the main thread.

    • Ignacio Nieto Carvajal

      August 13, 2014 at 10:06 am

      I see, and that definitely makes sense (as ARC just puts the MRR code for you). Thanks, Dhayanithi!

  • Rob Ryan

    September 17, 2014 at 3:41 am

    You say “The problem is that ARC usually defines one main autorelease pool in the project, basically enclosing the main function. That means that, unless you take care of memory management and define other autorelease pools, the memory of your program can grow without control, potentially reaching a very high memory footprint on the system. You can assign an object to nil, thus indicating to ARC that you are releasing that object because you no longer need it, but you cannot really tell when (and if) ARC decides to actually free that object.”

    This is not true. Apps run with a run loop, and the autorelease pool is drained automatically every time you yield back to the run loop. And dispatch/operation queues also have their own autorelease pools, and again the draining of the pool happens in an entirely deterministic manner.

    The only time you need to define your own autorelease pools in Objective-C is when using NSThread or if you need to further refine the peak memory usage of some routine that is creating many autoreleased objects. But to describe ARC’s memory management as non-deterministic is, IMHO, inaccurate.

    What I would concede, though, is that while ARC made the behind-the-scenes memory management more opaque than I’d like (it’s entirely deterministic, but not always obvious unless you’re well versed in the method naming rules), Swift has further confused the topic by not elucidating (AFAIK) the precise mechanics the language employs. And given that our Swift code is likely to use both Cocoa objects (which follow autorelease conventions) and Swift objects (which do something else!), the situation is unnecessarily confusing.

    • Ignacio Nieto Carvajal

      September 17, 2014 at 7:28 am

      Hi Rob,

      Thanks for commenting. You make some really good points here, but still, the fact that the end of an autorelease pool triggers the memory cleanup doesn’t mean that ARC as a whole has a deterministic behavior. In ARC, an object is supposed to be deallocated when its reference count reaches zero. It does not need (in theory) to reach the end of the autorelease pool. Of course, when the end of the release pool is reached, the objects allocated inside will (again, in theory) get deallocated, but in a normal situation of assigning “object = nil”, you cannot really tell when (and actually if) the memory will be freed.

      Besides, as you point out, there are other situations in which you need to use autorelease pools, like reducing the memory peaks in an especially bloated for loop. It’s that kind of use that I was referring to in the article. There are many cases, however, in which you cannot rely on an autorelease pool, like an application that is mainly composed of a set of different shared objects using the singleton pattern (so they are globally scoped and cannot be enclosed by the autorelease pool of a single thread) or a server-like background application. In these cases, ARC is simply not smart enough to properly define the lifecycle of objects.

      Of course, I still think (and my tests seem to back that up) that ARC behaves in a non-deterministic manner. If it were otherwise, I would be certain that every time I assign nil to an object that has no other strong references from other objects, it will be deallocated instantly, which is not the case. That’s what I meant. Thanks again for your corrections and point of view.

  • Patrick

    October 11, 2014 at 9:07 am

    Pure swift code only uses retain/release, no autorelease, as per these WWDC slides: http://bit.ly/1qbcAHc
    This should make it completely predictable when deallocation happens, something you don’t get with GC.

    Also, in my understanding, GC does not typically do reference counting, but does graph analysis during collection. GC is triggered to do a collection by certain conditions (memory pressure, etc.) that are unfortunately typically unpredictable. At that point it completely halts the application (‘Stop the World’) to do a collection round – doing it on a separate thread is I think impossible because then things might change from under you. During collection it does relatively complex analysis to determine which objects are no longer used. I’m pretty sure this is significantly more computationally expensive than ARC, and it stops the entire application for a while which can be problematic in real-time applications.

    See this on Java GC: http://www.oracle.com/webfolder/technetwork/tutorials/obe/java/gc01/index.html

    • Ignacio Nieto Carvajal

      October 11, 2014 at 11:41 am

      Yes, you’ve got some good points here, and I’m of course not saying that GC is predictable, but my tests indicate that ARC isn’t either, at least not in a real scenario.

      My point is that I would rather have one of two alternatives: complete control over memory allocation/deallocation as in C/C++, with all the hassle and bother, but with the certainty that I have full control over how memory behaves; or a GC with automatic cycle detection that, even though it is slower and worse from a performance point of view, allows me to write code without artifacts like strong/weak variables, weak self references or stuff like that, which has nothing to do with the problem I’m solving with my App.

      I mean, a high level language is supposed to abstract you from that. If not… let me have full control and accept responsibility for my mistakes as in C/C++. With ARC, I have neither full control over memory behavior, nor full dedication to just problem-solving code.

  • Max Howell

    October 23, 2014 at 7:17 pm

    Anyone who has developed for both Android and iPhone knows ARC beats GC. Unless you enjoy the whole app pausing every few seconds.

    • Ignacio Nieto Carvajal

      October 23, 2014 at 9:39 pm

      I never said in the post that GC was “better” than ARC; maybe you would like to re-read it. And I find “beats” a really vague and incorrect term to use here: beats as in “easier for the developer”? (I don’t think so), less prone to mistakes? (Obviously not), faster performance? (Probably yes), less likely to lead to retain cycles? (Don’t count on that)… Let’s be serious here, shall we?

      By the way, I don’t know of any Android App that “pauses” every few seconds due to GC. These kinds of comments can easily be labelled as “fanboyism” and don’t really add much in the way of real arguments. I sure do like iOS “better” than Android, but I think we should be critical and question our tools as developers in order to improve them, don’t you agree?

  • Daniel

    January 28, 2015 at 8:14 pm

    I have a different problem with ARC that I don’t have with Java’s memory model. I’m practicing with a bare-minimal, single-file, non-Xcode Swift app (60 lines) that I run from the command line with “xcrun swift”. In deference to minimalism, I tried making the top level code simply:

    myGui().app.run()

    (“app” is lazily initialized to be a complete but minimal top-level window with appropriate delegates.) Upon return of myGui(), but before run() is called, ARC determines there are no references to the myGui instance, and runs its deallocation method! The process crashes quite nastily and immediately within the run method since all of its referents have been unceremoniously clobbered. This is just beyond bizarre: aren’t I obviously working with a field of this object at the moment?

    I *must* do:

    var m = myGui(); m.app.run()

    So that there’s a reference to the GUI object long enough! This seems incredibly aggressive. Java doesn’t force me to do this; an object, even though anonymous, that gets instantiated and then *does* things that run for a while, doesn’t get garbage collected until the function goes out of scope, and since this is top level, the anonymous myGui instance would never go out of scope. The code clearly hinges on this; it works fine with the change above.

    Thoughts?

    • Ignacio Nieto Carvajal

      January 29, 2015 at 8:24 am

      Hi Daniel, thanks for sharing that with us. Please take into account that ARC doesn’t have a Garbage Collector like Java does, so memory works by using the retain/release scheme, which ARC includes automatically under the hood. Whereas in Java the variable is not released (g-collected?) until it has had the time to call run() and return, under ARC, if you are not effectively assigning the result to a variable, it’s not being retained and so it gets released.

      In your case, as you point out, the solution is as simple as assigning it to a variable. What worries me are the most complex examples in which the way we program gets actually influenced by the memory management scheme implemented by the language.

  • Jon Harrop

    February 4, 2016 at 1:05 am

    The comments on this article are little more than a long list of memory management myths. Let me dispel some:

    “I believe ARC is more efficient and faster than GC”

    Reference counting is well known to be much slower than even a simple tracing GC (see http://flyingfrogblog.blogspot.co.uk/2011/01/boosts-sharedptr-up-to-10-slower-than.html). The reason is that incrementing and decrementing reference counts is expensive because it is often a cache miss. This is the main reason why both Java and .NET dropped reference counting garbage collection in favor of tracing garbage collection in their early days.

    “ARC memory cleaning not happens “in BACKGROUND” and in deterministic way”

    Thread safe reference counting is inherently non-deterministic because threads race to decrement to zero and the loser of the race is burdened with cleanup.

    “At that point it completely halts the application (‘Stop the World’) to do a collection round – doing it on a separate thread is I think impossible because then things might change from under you”

    That hasn’t been true since 1978. Fully concurrent, parallel and real-time tracing garbage collectors have been around for many years. If the GC on Android sucks then you can say that the GC on Android sucks but you cannot say that all GCs suck.

    I should also note that pause times with a simple incremental generational GC like the one in OCaml are at least bounded whereas pause times with reference counting are unbounded because decrements to zero can avalanche. Mathematica uses reference counting and often suffers from very long pauses while it recursively cleans up trees.

    “I’m pretty sure this is significantly more computationally expensive than ARC”

    Again, it is well known that tracing is faster than reference counting.

    “under ARC, if you are not effectively assigning the result to a variable, it’s not being retained and so it gets released”

    GC happens at run-time long after the compiler has eliminated all notions of variables. So assignment to a variable is irrelevant. There is still a reference to that object so the reference count of that object should never reach zero by definition. If Swift is doing this then it is a really serious bug in Swift.

    “Please take into account that ARC doesn’t have a Garbage Collector like Java does”

    Actually ARC is a garbage collector. See the standard monograph on the subject: the GC Handbook by Jones et al. You may also enjoy the paper “A Unified Theory of Garbage Collection” that presents reference counting and tracing as duals of each other. https://www.cs.virginia.edu/~cs415/reading/bacon-garbage.pdf

  • Per Bull Holmen

    June 18, 2016 at 11:50 am

    Chris Lattner has explained well why Swift decided not to opt for GC. You can read it here:

    https://lists.swift.org/pipermail/swift-evolution/Week-of-Mon-20160208/009422.html

    The main point is that relying on (non-ARC-type) GC isn’t acceptable in many places, such as low-level systems code, boot loaders, kernels, etc., and also in real-time systems.

    The second point is that GCs tend to use large amounts of memory. I’m readily willing to believe that, seeing how Java uses memory in heaps and bounds whereas other languages use far less.

    I see Jon Harrop say that real-time tracing GCs have existed for years, and I’m sure that’s true, but I’m guessing this might come with even more trade-offs (like even more memory use?). I think, rather than a theoretical discussion, I’d like to see a real GC, in a real multipurpose language, which does not use too much memory, rather than some scientific paper or study about something that could be made.

    Interoperability with plain C is also mentioned, which is still a necessity for a lot of real usages. I have heard mixing Java and plain C isn’t fun, and I guess it might also have something to do with GC.

    Personally, I think ARC is a good trade-off, between the convenience of a “regular” GC and the low-level control of manual memory management.

    Regarding the discussion about deterministic behaviour….
    If the ARC behaviour in Swift isn’t completely changed from the Objective-C days, then deallocation happens like this:

    Locally scoped references get released immediately when they go out of scope, and if there are no other references to the same object, it’s deallocated immediately.

    Other references get autoreleased. The autorelease pool is drained at the end of the Run Loop. Then it is released, and if there are no other references to it, it gets deallocated. This means, in practice, that it is deallocated when your application is finished handling the current event. If the last reference to an object is removed in a different thread, then it is deallocated when the autorelease pool of that thread is drained, which is up to the programmer to manage.

    You mention an example of an application made up of singletons, which supposedly introduces indeterministic behaviour, but this shouldn’t matter. The only thing that should matter is which thread the code that removes the last reference is called from. If you created the thread, it’s up to you; if it’s the main thread, it happens after the current event is handled by the application. Note that when the last reference to an object is removed, the object is no longer visible to other threads, so this shouldn’t cause any conflicts.

  • Per Bull Holmen

    June 18, 2016 at 1:40 pm

    Hmmm, after writing that post I had to test this thing about autorelease behaviour. It seems to me that Swift does not use autorelease pools, and that therefore you can count on the object always being destructed immediately after the last reference is removed. If you have experienced otherwise, I think it might be because you actually had some references left that you didn’t consider. In other words, it is totally deterministic, the rules are simple, and Swift can be used with RAII, which wouldn’t be possible with a “regular” GC.

    The one use case for autorelease pools in Objective-C was to allow returning a reference from a function without keeping the reference. Therefore I added a function to test this in Swift, and it showed that also in this scenario, the object is destroyed immediately after all references to it are out of scope or no longer needed. There’s no deferral of the dealloc. Here’s the code:

    [CODE]

    import Foundation

    class RefCount {

        let data: Int

        init(data: Int) {
            self.data = data
        }

        deinit {
            print("I’m gone!")
        }
    }

    class EventHandler: NSObject {

        var refList: [RefCount] = []

        @IBAction func addReference(sender: AnyObject?) {
            print("Will change ref count \(refList.count)")

            if refList.count == 0 {
                refList.append(RefCount(data: 5))
            }
            else {
                refList.append(refList[0])
            }

            print("Did change ref count \(refList.count)")
        }

        @IBAction func removeReference(sender: AnyObject?) {
            if refList.count > 0 {
                print("Will change ref count \(refList.count)")

                refList.removeLast()

                print("Did change ref count \(refList.count)")
            }
        }

        func getRemovedReference() -> RefCount? {
            if refList.count > 0 {
                return refList.removeLast()
            }
            return nil
        }
    }

    class Dependent: NSObject {

        @IBOutlet weak var dependency: EventHandler!

        @IBAction func handleRemovedReference(sender: AnyObject?) {
            print("Handling ref counted object")
            if let data = dependency.getRemovedReference()?.data {
                print("Data is \(data)")
            }
            print("Done handling")
        }
    }
    [/CODE]

    Output, when removing last reference is:

    [CODE]

    Will change ref count 1
    I’m gone!
    Did change ref count 0

    [/CODE]

    So, the object doesn’t exist after the statement that removed the reference at all. Output when the reference is removed, then returned to another function is:

    [CODE]

    Handling ref counted object
    I’m gone!
    Data is 5
    Done handling

    [/CODE]

    As expected, totally deterministic. If you have any scenario where you think it’s NOT deterministic, please be very specific, and provide code snippets.

  • Observer

    August 21, 2016 at 12:24 pm

    GC causes unpredictable lagging and stuttering here and there, which significantly degrades the user experience. They did everything right, since the main goal of Swift is to develop GUI applications for their phones and computers.

  • Bob Jarvis

    January 20, 2017 at 7:09 pm

    Reference counting? In the 21st century? Seriously???

    Come on. Garbage collection is a settled issue. There are a ton of algorithms which can be used, every last one of which is better than reference counting. And it is quite possible to write a garbage collector which doesn’t cause “unpredictable lagging and stuttering”.

    Come on. It’s the 21st century. Wake up and *don’t* smell the garbage…

    And while we’re here – can we stake the designers of any and all languages past, present, and future *except C* which base their syntax on C out in the hot sun atop a fire ant nest and pour honey on their unmentionables? Please – I’m suffering from curly-brace oversensitization and I’m told this is the only cure….

    • Benjiro

      January 23, 2017 at 1:26 pm

      It took how many years for Go to bring their GC from 400ms pause times to sub-10ms? And they are still rewriting their GC for the 1.8/1.9 release. And that is with dedicated people focused on the issue.

      It’s not because ARC is an old technique that it’s bad. Even Go’s GC algorithms are based on papers and work done in the 1970s and 1980s.

  • Alex

    June 8, 2017 at 4:09 am

    OK, I agree that a full GC is much nicer for the programmer. Still, there’s a couple things that I think you missed.

    1. Apple is targeting some relatively memory-constrained devices, like smart watches. Refcounting uses a lot less memory for bookkeeping. It’s possible to make GC go really fast if you have 2-10x extra RAM for scratch space. That’s common in a server, but not a watch.

    The Apple Watch today has the same RAM as the late high-end PowerMac G4. Remember Java applets in the early 2000’s? Not fun.

    2. Apple already had a full GC, in OS X 10.5. It never worked very well. Needing to interoperate with arbitrary C, Objective-C, and C++ libraries is a nearly impossible task.

    So while it’s disappointing we don’t have GC today, we can’t blame them for not trying. They tried. They couldn’t make it work. What we have today is something that works.

    Maybe in another 10 or 15 years, we’ll have enough memory everywhere, and have killed off all our old C code, that a full GC will be feasible on the Swift runtime. Don’t hold your breath, though.

    • Ignacio Nieto Carvajal

      June 9, 2017 at 2:25 pm

      Hi there Alex, you make some good points here, and I can definitely agree with you on many of them. Thanks for commenting!
