<?xml version="1.0" encoding="UTF-8"?>
<rss xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:content="http://purl.org/rss/1.0/modules/content/" xmlns:atom="http://www.w3.org/2005/Atom" version="2.0"><channel><title><![CDATA[(r,d) => blog()]]></title><description><![CDATA[Xamarin, mobile, cross-platform]]></description><link>http://ryandavis.io/</link><generator>Ghost 0.5</generator><lastBuildDate>Tue, 07 Apr 2026 14:52:24 GMT</lastBuildDate><atom:link href="http://ryandavis.io/rss/" rel="self" type="application/rss+xml"/><ttl>60</ttl><item><title><![CDATA[How Not to Translate a Videogame (2025 ver.)]]></title><description><![CDATA[<p>At the AI Builders Brisbane <a href="https://www.eventbrite.com.au/e/ai-builders-brisbane-for-ai-experts-ai-engineers-ai-agent-builders-tickets-1309525850069">July 2025 Meetup</a>, I gave a 2025 edition of my <em>'How Not to Translate a Videogame'</em> talk from <a href="https://ryandavis.io/how-not-to-translate-a-videogame/">2019</a>. The talk explored how recent advances in AI, especially agentic capabilities, improved the scope and quality of the outcomes possible from "hobby-level" effort into automated translation of a relatively obscure Japanese visual novel, '<a href="https://en.wikipedia.org/wiki/12Riven:_The_Psi-Climinal_of_Integral">12Riven</a>' - the final game in a <a href="https://en.wikipedia.org/wiki/Infinity_(video_game_series)">series of four</a> that all otherwise have official or fan translations available. </p>

<h3 id="the2019approach">The 2019 approach</h3>

<p>The approach in 2019 relied on a pipeline of realtime text detection, recognition, and machine translation to produce a translated version of text from the game as it appeared. That translated text would then be displayed in a separate window. Even with the use of an Azure Custom Translate model (trained off the fan translation of an earlier game in the series), machine translation quality was variable, and the ergonomics of the whole setup were simply too awkward to apply for the entirety of a 30,000+ line visual novel.</p>

<p><img src="https://ryandavis.io/content/images/2025/07/2019.png" alt="">
<center><small><em>Technically, it worked. But it wasn't very good</em></small></center></p>

<h3 id="the2025approach">The 2025 approach</h3>

<p>In the session, we looked at how the use of Claude Code significantly improved the approach and outcomes in a second, "vibe translated" attempt. Agentic assistance greatly simplified the process of reverse engineering the game script and generating extraction/reinsertion and repacking utilities, enabling a translation to be applied directly to the game. The switch from machine translation of isolated lines to an LLM-based, 'context-aware' translation of batches in sequence, including a multi-pass review process and an automatically-maintained translation consistency guide, resulted in a (subjectively) greatly improved quality of translation. The development of a parallel agent framework enabled reliable, unattended, high-throughput translation of the script, in a manner that took advantage of the Claude Code Max subscription usage windows to perform an estimated $1,100 AUD of token usage at zero incremental cost. </p>
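<p><em>To illustrate the batching idea - this is a hypothetical sketch, not the actual pipeline code: each batch of script lines is sent for translation together with the tail of the previous batch, so the model can see how an exchange started before continuing it. <code>translate_batch</code> stands in for a real LLM call.</em></p>

```python
def batch_with_context(lines, batch_size=50, context_size=5):
    """Yield (context, batch) pairs: each batch of script lines is
    accompanied by the last few lines of the preceding batch, so the
    translator sees how an exchange started before continuing it."""
    for start in range(0, len(lines), batch_size):
        context = lines[max(0, start - context_size):start]
        yield context, lines[start:start + batch_size]

def translate_script(lines, translate_batch):
    """translate_batch(context, batch) -> list of translated lines.
    In the real pipeline this would be an LLM call that also consults
    a consistency guide for names and terminology."""
    out = []
    for context, batch in batch_with_context(lines):
        out.extend(translate_batch(context, batch))
    return out
```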

<p>Although much code was produced and executed in the process of reverse-engineering, extraction, translation and repacking of the game - no code was written by me. I noticed various suboptimal implementations as I reviewed some of the scripts, prompts, and intermediate outputs while pulling together the slides; but when vibing, "we run the code - we don't judge". The hard work was handled by my good friend Claude Code.</p>

<p><img src="https://ryandavis.io/content/images/2025/07/hal.png" alt="">
<center><small><em>Friendship ended with doing anything myself ever</em></small></center></p>

<p>At first glance, this appears to be an excellent and compelling approach to making a Japanese game accessible to an English speaker - but I have to acknowledge that I have no idea how good or bad the translation output actually is, since I can't read Japanese to verify it. This ties into a caution I'd offer about using AI to do something you aren't able to do yourself. Beyond the risk involved in taking a dependency on a volatile third party, it's worth considering that you are unlikely to be able to properly evaluate the quality, completeness, or robustness of an artefact related to a domain you do not understand. This is fine for hobby-level or personal projects, and I will happily use this translation to play the game in the knowledge that it might not be accurate. For anything professional, quite appropriately, a professional translator is called for. </p>

<p>A sample of the translated script in game can be seen in the video below. A few obvious issues, like broken speaker detection and lack of text breaking, were left unfixed to demonstrate that beyond replacing the script, there will typically be fixes required to get a nicely functioning fan translation. I reckon Claude and I can crack those without too much trouble though 😎</p>

<iframe width="570" height="380" src="https://www.youtube.com/embed/CA_q-nUS2E0" title="Translating a game automatically using an LLM" frameborder="0" allow="accelerometer; autoplay; clipboard-write; encrypted-media; gyroscope; picture-in-picture; web-share" referrerpolicy="strict-origin-when-cross-origin" allowfullscreen></iframe>

<p>Although it is not the right way to translate a videogame, this is adequate for my purposes. It will allow me to achieve one of my life's two dreams - completing the Infinity series - so that I may return my full focus to the other: becoming moderately good at the piano. </p>

<p>Overall, an excellent result.</p>

<p><em>(P.S. see the <a href="http://ryandavis.io/#issues">addendum</a> for progress on fixing issues)</em></p>

<p>Slides (63): <a href="https://ryandavis.io/content/images/2025/07/How_Not_to_Translate_a_Videogame_-_Ryan_Davis_-_20250716.pdf">PDF</a>  </p>

<table>  
<tr>  
<td width="373">  
<img src="http://ryandavis.io/content/images/2019/03/slides/Slide40.PNG" alt="">
</td>  
<td>  
<img src="http://ryandavis.io/content/images/2019/03/slides/Slide6.PNG" alt="">
</td>  
</tr>  
<tr>  
<td>  
<img src="http://ryandavis.io/content/images/2025/07/Slide28.png" alt="">
</td>  
<td>  
<img src="http://ryandavis.io/content/images/2025/07/Slide37.png" alt="">
</td>  
</tr>

<tr>  
<td>  
<img src="http://ryandavis.io/content/images/2025/07/Slide46.png" alt="">
</td>  
<td>  
<img src="http://ryandavis.io/content/images/2025/07/Slide50.png" alt="">
</td>  
</tr>  
</table>

<p><a name="issues"></a></p>

<h3 id="addendumfixingissues">Addendum - Fixing Issues</h3>

<p>Since the talk, Claude and I have started tackling the remaining issues with the translation. I'll keep this section up to date with progress.</p>

<h4 id="speakerdetection">Speaker detection</h4>

<p>As you'll notice in the video, "speaker detection" is broken in the original translation. The speaker indication appears to be part of script messages (i.e. is not controlled by opcodes/script logic), and is included at the start of messages by placing the speaker name within special brackets. For example:</p>

<blockquote>
  <p>【Renmaru】"No good here..."</p>
</blockquote>

<p>In our early translation attempt, these messages appear with the speaker indication inline in the messagebox. It's wrong, and a bit distracting!</p>
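<p>For illustration, lifting a speaker name out of a message of this shape is simple to express. A minimal sketch (the regex and function name are mine, not the game's actual logic):</p>

```python
import re

# Matches a leading speaker name in fullwidth lenticular brackets,
# e.g. 【Renmaru】"No good here..."
SPEAKER_RE = re.compile(r'^【(?P<name>[^】]+)】(?P<body>.*)', re.DOTALL)

def split_speaker(message):
    """Return (speaker, body); speaker is None for narration lines."""
    m = SPEAKER_RE.match(message)
    if not m:
        return None, message
    return m.group("name"), m.group("body")
```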

<p><img src="http://ryandavis.io/content/images/2025/07/speaker-not-detected.png" alt="">
<center><small><em>Yeah no good all right mate</em></small></center></p>

<p>My original hypothesis was that we were missing a space between the special end bracket and our quote mark, which was preventing the speaker name from being lifted to its rightful place. Of course, Claude agreed (it always does), but various experiments with characters and spacing proved fruitless. Claude was convinced that our translation pipeline was breaking the detection by messing with characters, and unfortunately, I had to get my own hands dirty - operating a hex editor myself, like an animal - to prove that incorrect. </p>

<p>After demonstrating that speaker detection worked if we left the original Japanese names in the script, I adjusted the hypothesis: there must be reference data elsewhere in the game files with the speakers defined. Tasking out several parallel agents to find references to speaker names in non-core script files turned up a long list of speaker names in the <code>DATA.BIN</code> file. As an aside, tasking out many parallel agents to search for patterns, or even to perform the same reverse engineering analysis on different files, is a really effective way to increase the scope of findings within a fixed period of time. It did not take too much more fiddling to implement a speaker reference data replacement step in our translation pipeline, and now speaker detection works as expected!</p>
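<p><em>As a rough illustration of what a step like that can involve - this is a hypothetical sketch, not the real pipeline code; the actual <code>DATA.BIN</code> layout, encoding, and padding rules came from reverse engineering - a common constraint in this kind of work is that a replacement must not change the byte length of an entry, so that offsets elsewhere in the file are not disturbed:</em></p>

```python
def replace_speaker_names(data: bytes, mapping: dict,
                          encoding="shift_jis", pad=b"\x00"):
    """Replace each speaker name in a binary blob with its translation,
    padded to the original byte length so that offsets elsewhere in the
    file stay valid. Hypothetical: real padding/encoding rules would
    come from reverse engineering the actual file format."""
    for original, translated in mapping.items():
        old = original.encode(encoding)
        new = translated.encode(encoding)
        if len(new) > len(old):
            raise ValueError(f"{translated!r} does not fit in {len(old)} bytes")
        data = data.replace(old, new + pad * (len(old) - len(new)))
    return data
```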

<p><img src="http://ryandavis.io/content/images/2025/07/speaker-detected----.png" alt=""></p>

<p>Next up - text wrapping. To quote a timeless phrase:</p>

<blockquote>
  <p>This story is not yet at an end, the truth is not revealed - It is an infinity loop!</p>
</blockquote>]]></description><link>http://ryandavis.io/how-not-to-translate-a-videogame-2025-ver/</link><guid isPermaLink="false">c7eeffca-dd46-4a56-923f-c0bf41e079d3</guid><category><![CDATA[almost-famous]]></category><category><![CDATA[game]]></category><category><![CDATA[translation]]></category><category><![CDATA[12riven]]></category><category><![CDATA[infinity-series]]></category><category><![CDATA[ai]]></category><category><![CDATA[llm]]></category><dc:creator><![CDATA[Ryan Davis]]></dc:creator><pubDate>Fri, 25 Jul 2025 23:51:18 GMT</pubDate></item><item><title><![CDATA[Improving dotnet iOS release build times on Apple Silicon]]></title><description><![CDATA[<p>In 2023, the life of a C# iOS developer is pretty good. We have apple silicon, and dotnet supports it. The legacy Xamarin toolchain is not arm64 friendly and probably never will be, but once you migrate to the new stuff, you'll find yourself in an all-arm64 development nirvana, where builds zip away silently, and the hot, noisy days of intel past are but a faint memory. </p>

<p>Everything is as it should be 🏝️💻 . . .</p>

<p>...</p>

<p>...</p>

<p>...</p>

<p>Or is it? In this post we'll learn how to identify and replace some of the pesky intel binaries that sit between us and a trip to <code>csrutil disable</code> to remove Rosetta for good^, and speed up iOS publishes along the way.</p>

<h3 id="analarmingdiscovery">an al-arm-ing discovery</h3>

<p>You can follow along if you're on an M1, or just take my word for it:</p>

<p>Open Activity Monitor, sort the process list by <code>Kind</code>. <br>
If you don't have the <code>Kind</code> column, you should 😤 <br>
<small>(you can turn it on by right-clicking the column headers)</small></p>

<p>Open Terminal and get yourself to a dotnet ios/maui project on your machine somewhere. <br>
If you don't have one, <code>dotnet new maui -o gathering_intel &amp;&amp; cd gathering_intel</code> will get you set up with a starter project</p>

<p>Then kick off a publish <br>
<code>dotnet publish -c:Release -f:net7.0-ios -r:ios-arm64 -p:EnableCodeSigning=false -v:n</code></p>

<p>Before long, you'll start to see the <code>mono-aot-cross</code> invocations fill the terminal. They start off like this:</p>

<p><code>Tool /usr/local/share/dotnet/packs/Microsoft.NETCore.App.Runtime.AOT.osx-x64.Cross.ios-arm64/7.0.3/Sdk/../tools/mono-aot-cross execution started with arguments: ...</code></p>

<p>Take note of the path to <code>mono-aot-cross</code> for later. Now switch back to Activity Monitor, and try not to audibly gasp.</p>

<p><center><img src="http://ryandavis.io/content/images/2023/03/tarnished.png" alt="" title=""></center><center><em><small><smaller>more like, "opt-out please" am i right 🤓</smaller></small></em></center></p>

<p>Just to be sure:</p>

<p><code>find /usr/local/share/dotnet/packs/Microsoft.NETCore.App.Runtime.AOT.osx-x64.Cross.ios-arm64/7.0.3/Sdk/../tools/ | grep cross/ios-arm64/ | xargs file | grep executable</code></p>

<pre><code>/usr/local/share/dotnet/packs/Microsoft.NETCore.App.Runtime.AOT.osx-x64.Cross.ios-arm64/7.0.3/Sdk/../tools/llc:                               Mach-O 64-bit executable x86_64
/usr/local/share/dotnet/packs/Microsoft.NETCore.App.Runtime.AOT.osx-x64.Cross.ios-arm64/7.0.3/Sdk/../tools/opt:                               Mach-O 64-bit executable x86_64
/usr/local/share/dotnet/packs/Microsoft.NETCore.App.Runtime.AOT.osx-x64.Cross.ios-arm64/7.0.3/Sdk/../tools/mono-aot-cross:                    Mach-O 64-bit executable x86_64
</code></pre>

<p>Yes it's true: even in our arm64 dotnet install, <code>mono-aot-cross</code>, <code>llc</code> and <code>opt</code> - the bits that handle AOT compilation and optimisation - currently ship as x86_64 binaries and are run under Rosetta. </p>

<h3 id="ehthisonlyaffectspublishingitsnobigdeal">eh, this only affects publishing, it's no big deal!</h3>

<p>That's fair - most of the time, we don't care <em>that</em> much about how long release builds take. Because of changes to the build approach in dotnet ios, or maybe just because of most everything else being arm64 on apple silicon, the development-time experience is pretty zippy (I still make heavy use of <a href="https://www.github.com/rdavisau/tbc">tbc</a> though). </p>

<p>But what about when you're gearing up for release and start to focus on bundle size, or things like startup performance? That's the thing: <strong>The only way to know the true impact of a change with respect to bundle size or performance is to perform a release build.</strong> So at some point in your project you might just find yourself in an 'inner-dev-loop' of release builds, and at that time, build times might matter. </p>

<p>I went in search and found that, unsurprisingly, the dotnet team had already identified this gap in the arm64 binaries, tracked in <a href="https://github.com/dotnet/runtime/issues/74175">this issue</a>. The scope of that issue was eventually narrowed and the remainder (including macos arm64) is tracked <a href="https://github.com/dotnet/runtime/issues/82495">here</a>. That means this should eventually get resolved, maybe even in a future net8 preview, and you could just wait for that. But what if you're doing size/performance optimisation work NOW? You'll have to get your hands dirty, but it is possible to solve this for yourself (for some definition of solve).</p>

<h3 id="howmuchfasterisusingnativeaotbinariesoverrosetta">how much faster is using native AOT binaries  over rosetta</h3>

<p>So you can decide whether it's worth doing this, I've run some highly un-scientific speed tests. Here's a chart of my findings:</p>

<p><center><img src="http://ryandavis.io/content/images/2023/03/spdyy.png" alt="" title=""></center><center><em><small><smaller>measured once on one machine only - ymmv</smaller></small></em></center></p>

<p>I tried four projects - <code>dotnet new ios</code>, <code>dotnet new maui</code>, <code>eshop mobile client</code> (from <a href="https://github.com/dotnet-architecture/eshop-mobile-client">here</a>) and one of my own. I added <code>-clp:PerformanceSummary</code> to the publish invocation to get the timings for the <code>AOTCompile</code> task. </p>

<p>On my machine, the aot compilation time reduction ranged from 30-35% across the projects - let's call it a third. Apple says the M2 gives up to 20% faster CPU performance than the M1, so if like me you have an M1 and sometimes have irresponsible thoughts about an M2, this basically saves you six to eight thousand australian dollarydoos. </p>

<h3 id="howtohighlevel">how to (high level)</h3>

<p>We saw in the github issue that the dotnet team ran into issues doing this - how can we expect to be able to make it work? We can make it work because we have simpler goals. The dotnet team has to worry about pesky things like "passing build pipelines", "architectures other than arm64", "solutions that don't just work on one person's machine" and other realities of shipping an sdk and runtime. We don't need to concern ourselves with those kinds of hassles. </p>

<p>We just want to take our arm64 mac and produce arm64 aot compiler binaries that aot for ios-arm64, and then somehow have them be used by the build. For that, we can build our own out of <code>dotnet/runtime</code>, and then just overwrite the intel binaries we originally got from official sources with our bootleg ones. What could possibly go wrong?</p>

<p>Just like back in the day when <a href="https://ryandavis.io/how-to-have-your-ios-13-preview-cake-and-emit-it-too/">we were rolling our own reflection-emit-enabled Xamarin.iOS versions</a>, it goes without saying that <strong>you should exercise caution when replacing core parts of the dotnet build pipeline with custom built tools</strong>. It's true that we are building off tagged ("blessed") commits, but the reality is that arm64 aot compiling binaries aren't officially produced right now, and this use case may not have been through the same testing rigour that supported use cases have. I haven't had any issues (yet?), but <strong>it's probably best to limit use of these binaries to the aforementioned 'inner-release-loop' scenarios only, and use the official binaries for builds you actually want to ship</strong>. No warranties provided, proceed at your own risk, etc. etc.</p>

<h3 id="howtoindetail">how to (in detail)</h3>

<p>With disclaimers out of the way, if you're still on board we're ready to start making a mess. With any luck, this process should only take 10-20 minutes.</p>

<p>First, clone <code>dotnet/runtime</code>: </p>

<p><code>git clone https://github.com/dotnet/runtime.git &amp;&amp; cd runtime</code></p>

<p>Then, check out the tag that matches the version of the sdk you're using to build. You can see it in the path of the aot invocation from earlier. In this post, the invocation was:</p>

<p><code>Tool /usr/local/share/dotnet/packs/Microsoft.NETCore.App.Runtime.AOT.osx-x64.Cross.ios-arm64/7.0.3/Sdk/../tools/mono-aot-cross execution started with arguments: ...</code></p>

<p>so we want <strong>7.0.3</strong>. In <code>dotnet/runtime</code>, the version tags are preceded by a 'v', so:</p>

<p><code>git checkout v7.0.3</code></p>

<p><em>(It's important to build off the tag matching the version of the dotnet sdk you're using. If not, you may run into errors due to differences between versions. For example, you can't build off the tip of <code>main</code>, which right now is .net8, and then use the outputs with a .net7 sdk; things will go badly. An implication of this is that when you update dotnet, or if you have different projects pinned to different versions of dotnet, you'll likely need to follow these steps and build binaries for each of them individually. Basically let's just hope arm64 binaries start shipping soon)</em></p>

<p>Building this repo requires <a href="https://github.com/dotnet/runtime/blob/main/docs/workflow/building/coreclr/macos-instructions.md">certain dependencies</a> to be available on your system. You can run the below from the repo root (where you should already be) to get them, assuming you already have <a href="https://brew.sh/">Homebrew</a>:</p>

<p><code>brew bundle --no-lock --file eng/Brewfile</code></p>

<p>Ok, now we're ready to build things. There are flags you can pass to the runtime build script to isolate the build of the AOT cross compiler, but I didn't get great results with various combinations of these (either only some binaries came out arm64, or the build just didn't work - which is maybe what the updated issue tracks). So let's keep it simple:</p>

<p><code>./build.sh -s mono+libs -os ios -arch arm64 -c Release</code></p>

<p>This should take somewhere between 5 and 10 minutes, and complete without issues. It's pretty impressive really (go look in <code>artifacts</code> to see all the things we built with one command and no shenanigans).</p>

<p>Now make sure that we got what we wanted:</p>

<p><code>find . | grep cross/ios-arm64/ | xargs file</code></p>

<p>You should see:  </p>

<pre><code>./artifacts/bin/mono/iOS.arm64.Release/cross/ios-arm64/llc:            Mach-O 64-bit executable arm64
./artifacts/bin/mono/iOS.arm64.Release/cross/ios-arm64/opt:            Mach-O 64-bit executable arm64
./artifacts/bin/mono/iOS.arm64.Release/cross/ios-arm64/mono-aot-cross: Mach-O 64-bit executable arm64
</code></pre>

<p>Yes! arm64 all the things!</p>

<p>All that's left to do is to overwrite the official binaries with our own ones. Once again, the invocation from earlier tells us where these need to go. Just in case, let's keep a copy of the original bits around (also useful if you want to do comparisons). </p>

<p><em>(Remember to substitute the <code>7.0.3</code>s here and below for your version if necessary)</em></p>

<p><code>sudo cp -R /usr/local/share/dotnet/packs/Microsoft.NETCore.App.Runtime.AOT.osx-x64.Cross.ios-arm64/7.0.3/Sdk/../tools/ /usr/local/share/dotnet/packs/Microsoft.NETCore.App.Runtime.AOT.osx-x64.Cross.ios-arm64/7.0.3/Sdk/../tools/backup</code></p>

<p>That put all the original binaries under a subdirectory called <code>backup</code>. Now copy our new files over.</p>

<p><code>sudo cp artifacts/bin/mono/iOS.arm64.Release/cross/ios-arm64/* /usr/local/share/dotnet/packs/Microsoft.NETCore.App.Runtime.AOT.osx-x64.Cross.ios-arm64/7.0.3/Sdk/../tools/</code></p>

<p>And that's it! Let's run another publish and see how it goes.</p>

<p><center><img src="http://ryandavis.io/content/images/2023/03/thatsmorelikeit.png" alt="" title=""></center><center><em><small><smaller>zooom</smaller></small></em></center></p>

<p>Now we're cooking with charcoal. Enjoy your 33% faster builds! <br>
<small>^(Only one intel binary left!)</small><small> <br>
(it's <code>m l a u n c h</code>)</small></p>

<hr>

<h3 id="bonusthoughtsotherfactorsaffectingbuildtime">bonus thoughts: other factors affecting build time</h3>

<p>Switching from x64 to arm64 binaries is a nice 'free' build time improvement. There are a couple of other things that you can look at.</p>

<h5 id="linkingtrimming">💡 Linking/Trimming</h5>

<p>The less code you have, the less code needs to be AOT compiled. Using the linker will reduce the time spent in <code>AOTCompile</code> (and the output binary size). Some of it will be moved to the <code>ILLink</code> task, but the net effect should be a faster build and a happier user.</p>

<h5 id="dealingwithaotunfriendlyassemblies">💡 Dealing with AOT-unfriendly assemblies</h5>

<p>In the chart from earlier, "my app"'s AOT time went from ~120s to ~80s when switching to arm64 binaries. But when I first started looking at the build time, the non-arm64 AOT time was around 800s 🤯. Watching CPU usage and looking at build output made it clear - one assembly in the project took several times longer to AOT than all of the others. </p>

<p>The way the AOT step works is that the build system basically spawns an AOT process per assembly, for all assemblies at once, and lets the operating system manage their resource allocation. That's why in the screenshots of Activity Monitor in this post, you see a large number of processes using a small fraction of a cpu core - there are some 100+ processes trying to get their slice of 10 cores. Each of the AOT processes appears to operate on a single thread, which is fine in the beginning when there are more processes than cores and the cpu is oversubscribed. But if a single process takes much longer than the others, eventually it will be left running on its own on a single thread, which is not very optimal. Scraping the output, I was able to see this behaviour in my own build (names removed to protect the innocent):</p>

<p><center><img src="http://ryandavis.io/content/images/2023/03/aottimes.png" alt="" title=""></center><center><em><small><smaller>one of these things is not like the other</smaller></small></em></center></p>

<p><em>(n.b. Because of the likely nondeterministic nature of oversubscription, it's not truly fair to compare any of the specific numbers in the above diagram, but it's fine for general magnitudes)</em></p>

<p>Essentially, the AOT of one assembly was responsible for blowing out the build time by 10+ minutes. In my case, the functionality being used in that library was something that could be replicated natively without too much hassle, so I switched to that and removed the assembly. Another option would likely have been to link aggressively on that assembly to remove more of the code causing the AOT work (my guess - heavy use of generics).</p>
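<p><em>The scheduling effect is easy to see with a toy model (illustrative numbers only, not my real measurements): when single-threaded jobs run in parallel, total wall time is bounded below by the longest single job, no matter how many cores you have.</em></p>

```python
import heapq

def makespan(job_times, cores):
    """Wall-clock time to finish all jobs, assigning each job to the
    earliest-free core - a rough greedy model of the OS scheduling one
    single-threaded AOT process per assembly."""
    finish = [0.0] * cores  # time at which each core frees up
    heapq.heapify(finish)
    for t in sorted(job_times, reverse=True):
        earliest = heapq.heappop(finish)
        heapq.heappush(finish, earliest + t)
    return max(finish)

# 100 small assemblies on 10 cores vs. the same plus one pathological one:
balanced = makespan([5] * 100, cores=10)        # 50s: cores stay busy throughout
skewed = makespan([5] * 100 + [600], cores=10)  # 600s: the one slow job dominates
```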

<p><em>how did I produce this? Probably you can do something smart with binlogs, but I just scraped the msbuild output. First, log the build to a file by adding <code>-flp:v=diag -flp:logfile=mylog.log</code> to your build arguments. Then, use <a href="https://gist.github.com/rdavisau/c8009ddf01987c5a8b52eb0683614e7a">this gist</a> to process the file.</em></p>
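<p><em>(For a rough idea of the kind of scraping involved - using an entirely hypothetical log line format here; the real msbuild diagnostic output differs and the gist above handles it properly:)</em></p>

```python
import re
from collections import defaultdict

# Hypothetical log line shape, e.g. "AOT'd Foo.dll in 3.5s" - purely
# illustrative; real msbuild diagnostic logs look quite different.
LINE_RE = re.compile(r"AOT'd (?P<asm>\S+) in (?P<secs>[\d.]+)s")

def aot_times(log_text):
    """Sum per-assembly AOT durations and sort slowest-first."""
    times = defaultdict(float)
    for m in LINE_RE.finditer(log_text):
        times[m.group("asm")] += float(m.group("secs"))
    return dict(sorted(times.items(), key=lambda kv: -kv[1]))
```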

<h5 id="optingtointerpretsomeassemblies">💡 Opting to interpret some assemblies</h5>

<p>This is more of a build size vs performance tip, and that should be your driving factor for this (not release build time), but I'll include it anyway. For a while now, we've had access to the interpreter option which <a href="https://ryandavis.io/practical-uses-for-the-mono-interpreter/">enables various scenarios</a>. In new dotnet ios, having it enabled is currently something you probably need to do because it is easy to unintentionally trigger code-gen (my theory is we had some special BCL assemblies in Xamarin days that avoided code-gen in certain methods but now that we share with dotnet you can hit it more easily). But <em>don't just enable it and interpret everything!</em></p>

<p>Don't: <em><small>(enable the interpreter and interpret all our assemblies)</small></em>  </p>

<pre><code>&lt;UseInterpreter&gt;true&lt;/UseInterpreter&gt;  
</code></pre>

<p>Do: <em><small>(enable the interpreter and interpret none of our assemblies, but be ready to interpret any codegen)</small></em>  </p>

<pre><code>&lt;UseInterpreter&gt;true&lt;/UseInterpreter&gt;  
&lt;MTouchInterpreter&gt;-all&lt;/MTouchInterpreter&gt;  
</code></pre>

<p>Doing the first one will skip AOTing everything, so I guess in the spirit of this blogpost it's going to make your release builds super fast and your outputs super small, but it's also going to make things much slower.</p>

<p>Consider: <em><small>(enable the interpreter and interpret specific assemblies)</small></em></p>

<pre><code>&lt;UseInterpreter&gt;true&lt;/UseInterpreter&gt;  
&lt;MTouchInterpreter&gt;-all,AssemblyToNotAOT1,AssemblyToNotAOT2&lt;/MTouchInterpreter&gt;  
</code></pre>

<p>Here we name <code>AssemblyToNotAOT1</code> and <code>AssemblyToNotAOT2</code> as assemblies that will be interpreted at run-time, so they won't be AOT-compiled at build time. This will reduce output size and release build time.</p>

<h5 id="gettinganm2">💡 Getting an M2</h5>

<p>No no... just do some of the above 😎</p>

<hr>

<p>By combining the use of arm64-specific binaries and maybe a few build tips, hopefully you can see an improvement in your release-build times like I did, plus better battery life and an improvement in your general health and wellbeing. </p>

<p>Finally, everything as it should be 🏝️💻 . . .</p>]]></description><link>http://ryandavis.io/improving-dotnet-ios-release-build-times-on-apple-silicon/</link><guid isPermaLink="false">b2f8ad63-4d8d-4f70-996b-26eddb64e0d2</guid><category><![CDATA[xamarin]]></category><category><![CDATA[ios]]></category><category><![CDATA[code]]></category><category><![CDATA[dotnet]]></category><dc:creator><![CDATA[Ryan Davis]]></dc:creator><pubDate>Tue, 14 Mar 2023 22:08:42 GMT</pubDate></item><item><title><![CDATA[Using Custom AOT Profiles with Xamarin.Android]]></title><description><![CDATA[<p>For as long as I can remember, Xamarin.Android has given developers the ability to choose between using JIT and AOT compilation for their applications. JIT'ed builds are far smaller, but carry the overhead of runtime method compilation. AOT builds are much larger, but generally more performant - often by a substantial margin. An <a href="https://xamarinhelp.com/xamarin-android-aot-works/">old post by Adam Pedley</a> is still a good reference if you want to know more about AOT on Android. </p>

<p>In general we are performance sensitive on Android, so the cost of JIT'ing many methods being called at startup for the first time is undesirable. Unfortunately, depending on marketplace, we are often binary size sensitive too, making the size increase incurred by using AOT compilation also undesirable. Ugh - we can't win! If only there were a middle ground - some way to get the best of both worlds. If we could balance some of the startup improvements of an AOT'd build with the binary leanness of a JIT'd build, that might be an ideal compromise.</p>

<p><img src="http://ryandavis.io/content/images/2020/02/the-perfect-apk-doesnt-exi--2.png" alt="" title=""><center><em><small><smaller>The perfect apk doesn't exis--</smaller></small></em></center></p>

<p>As you may know, a possibility of this nature has existed for a few months now in the form of a feature sometimes referred to as '<a href="https://devblogs.microsoft.com/xamarin/faster-startup-times-with-startup-tracing-on-android/">Startup Tracing</a>' and other times referred to as 'Profiled AOT'. At the time of introduction, these names were <em>technically</em> correct, but practically less so. Ideally the feature would be tailored to your specific app, but in these early versions the 'startup' that had been traced was not yours. Instead, it was that of a secret, generic, Xamarin.Forms-based app stored somewhere deep in Xamarin laboratories, forced to repeatedly start up and shut down while engineers meticulously documented the methods being called in order to decide what parts of an app should be AOT'd to improve startup times. </p>

<p>Well that's what I heard anyway.</p>

<p>With the release of <a href="https://devblogs.microsoft.com/xamarin/visual-studio-2019-version-16-5-preview-2/">Visual Studio 2019 16.5 Preview 2</a>, the Startup Tracing/Profiled AOT feature has been enhanced, and now allows developers to collect and use <em>their own</em> custom AOT profiles. This means that an AOT profile can be tailored specifically to an individual app - covering all the libraries and frameworks being used during startup or otherwise.  </p>

<h4 id="aotprofileshuh">AOT Profiles? Huh?</h4>

<p>So what's an AOT profile? Why is it called Profiled AOT? How is it different to full AOT?</p>

<p>Essentially, if you build a Xamarin.Android app using profiled AOT it comes out as a kind of hybrid JIT/AOT entity, containing both a mixture of ordinary .NET IL that will be JIT'd at runtime, plus some fully AOT'd code generated during the build process. Whether or not a given method in the app is AOT'd is determined by an AOT profile, which at its core is a list of methods that should be AOT'd. In theory, if a method isn't listed in the profile, it won't be AOT'd and will be JIT'd at runtime instead (although sometimes I saw results that suggest inputs other than the profile might influence whether something gets AOT'd). With that in mind, using profiled AOT makes it possible to selectively AOT-compile performance-sensitive parts of an app, allowing targeted size/performance tradeoffs and a 'best of both worlds experience'. Yes! </p>

<h4 id="whataboutstartuptracing">What about 'Startup Tracing'?</h4>

<p>Well (in my opinion) 'Profiled AOT' is the correct name for the feature just described - using an AOT profile to selectively AOT-compile parts of the app. Startup Tracing could reasonably refer to the process of recording method calls made during startup in order to produce an AOT profile tailored towards improved startup performance. Whilst this is the most likely use for profiled AOT on Android, it's worth mentioning that an AOT profile need not include only methods called during startup, and could conceivably include any combination of methods you think should be AOT'd because they are performance sensitive.</p>

<h4 id="gettingstartedwithprofiledaot">Getting started with Profiled AOT</h4>

<p>Making use of profiled AOT involves two distinct steps:</p>

<ol>
<li>Creating (or capturing) an AOT profile  </li>
<li>Using the AOT profile when building the app</li>
</ol>

<p>The steps are outlined briefly in the documentation <a href="https://github.com/xamarin/xamarin-android/blob/master/Documentation/guides/profiling.md#profiling-the-aot-compiler">here</a>, so this will be a more hand-holding version of that. Note that you'll need at least Visual Studio 2019 16.5 Preview 2 or VSMac 2019 8.5 Preview 2 to use this feature, and that the instructions here are for Windows but should be adaptable for macOS without too much trouble. </p>

<h5 id="step1capturinganaotprofile">Step 1: Capturing an AOT profile</h5>

<p>Given a general understanding of how your app starts up and the libraries it uses, it would be possible to try to create an AOT profile that covers startup by hand. However, doing so would be laborious, error-prone and likely to contain both omissions and unnecessary inclusions. With VS 16.5 Preview 2, Xamarin.Android includes a new msbuild target that can build an app in profiling mode, allowing the app to <em>automatically</em> keep track of method invocations being made while the app is running. To take advantage of this target, you should have an Android device plugged in to your machine, and a VS2019 Preview Developer Command Prompt open to the directory in which your Android project lives. For later steps, it's useful to open the prompt as Administrator.</p>

<p>The name of the target for profiling is <code>BuildAndStartAotProfiling</code> and you can invoke it using the following syntax: </p>

<pre><code>msbuild /t:BuildAndStartAotProfiling  
</code></pre>

<p>You'll see a lot of ordinary build output and then a few new bits. The app will also be launched on the device.</p>

<p><img src="http://ryandavis.io/content/images/2020/02/start-profiling.png" alt=""></p>

<p>When built and launched via the <code>BuildAndStartAotProfiling</code> target, the app is automatically keeping track of method invocations, and (by default) is listening on port <code>9999</code> for something to connect and retrieve the logs. Although we are talking about startup, it's worth keeping in mind that when running under this mode the app records <em>all</em> invocations that occur, not just those on the startup path. This means that if you continue to interact with the app after it starts up, the methods that are invoked as a result of those interactions will also be captured in the logs, and will form part of the final AOT profile.</p>

<p>Once you're happy that the app has completed startup, you need a way to retrieve the profile data from the device. To do so, you need to keep the app running and use a second target, <code>FinishAotProfiling</code>, to connect and retrieve the logs.  </p>

<pre><code>msbuild /t:FinishAotProfiling MyApp.csproj  
</code></pre>

<p>That target appears to wrap a default invocation of <code>aprofutil</code> (on Windows, it lives in <code>C:\Program Files (x86)\Microsoft Visual Studio\2019\Preview\MSBuild\Xamarin\Android\</code>), and you'll get better diagnostics by invoking it directly:</p>

<pre><code>aprofutil -s -v -f -p 9999 -o "custom.aprof"  
</code></pre>

<p><img src="http://ryandavis.io/content/images/2020/02/collect-profile-1.png" alt=""></p>

<p>One point to note with <code>aprofutil</code> is that it appears to require Administrator privileges to execute correctly (that's why I said to open the prompt as Admin earlier). However, it also requires <code>adb</code> to be on the <code>PATH</code>, which generally isn't the case for the VS Developer Command prompts. Because I'm lazy, my workaround was to open an Android ADB prompt from Visual Studio <code>Tools -&gt; Android -&gt; Android ADB Prompt</code>, then add the <code>PATH</code> from that prompt to the end of the <code>PATH</code> in the running Admin prompt. Don't @ me. <!-- If you want to be lazy like me, you do it like this:</p>

<p><em>in the adb prompt</em></p>

<pre><code>echo %PATH%  
</code></pre>

<p>The adb path displays, and you can copy it to your clipboard.</p>

<p><em>in the vs developer prompt</em></p>

<pre><code>SET PATH=%PATH%;*paste the path from the adb command prompt here*  
</code></pre>

<p>The paths are now combined and probably include a lot of duplication, but it shouldn't be a problem. The change to <code>PATH</code> is scoped to the current terminal session, so you don't need to worry about undoing it later.</p>

<p>--></p>

<p>From the log you can see that a new file was written - <code>custom.aprof</code>.  Congratulations - you've now  generated a custom AOT profile!</p>

<h5 id="step2usingtheaotprofilewhenbuildingtheapp">Step 2: Using the AOT profile when building the app</h5>

<p>The UI support in Visual Studio for working with custom AOT profiles is not complete, so it's easiest to just edit your .csproj by hand. First, you need to add a reference to the custom profile with the appropriate item type:</p>

<pre><code>&lt;ItemGroup&gt;  
 &lt;AndroidAotProfile Include="$(MSBuildThisFileDirectory)custom.aprof" /&gt;
&lt;/ItemGroup&gt;  
</code></pre>

<p>Then, in the Release section of your csproj, add properties that instruct profiled AOT to be used, and for the default profile to not be used.</p>

<pre><code>&lt;AndroidEnableProfiledAot&gt;true&lt;/AndroidEnableProfiledAot&gt;  
&lt;AndroidUseDefaultAotProfile&gt;false&lt;/AndroidUseDefaultAotProfile&gt;  
</code></pre>

<p>If you're in a project that has previously used AOT or startup tracing, you may also need to remove other configuration elements related to those. </p>
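<p>Put together, the relevant csproj pieces look like the below. The <code>Condition</code> attribute is just one common way to scope the properties to Release builds - adapt it to however your project structures its configurations:</p>

<pre><code>&lt;PropertyGroup Condition="'$(Configuration)' == 'Release'"&gt;
  &lt;AndroidEnableProfiledAot&gt;true&lt;/AndroidEnableProfiledAot&gt;
  &lt;AndroidUseDefaultAotProfile&gt;false&lt;/AndroidUseDefaultAotProfile&gt;
&lt;/PropertyGroup&gt;
&lt;ItemGroup&gt;
  &lt;AndroidAotProfile Include="$(MSBuildThisFileDirectory)custom.aprof" /&gt;
&lt;/ItemGroup&gt;
</code></pre>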

<p>And that's it! Now, when you perform a Release build of your app, the custom AOT profile will be used to determine which methods to AOT. You'll see slightly different output in the build log demonstrating this:</p>

<p><img src="http://ryandavis.io/content/images/2020/02/aot-output.png" alt=""></p>

<p>When comparing the size of AOT content to an apk that has been fully AOT'd, one that uses the startup profile should be substantially smaller. It turns out the perfect apk does exist, and in the case of the PrismZero app, the size impact is quite low:</p>

<p><img src="http://ryandavis.io/content/images/2020/02/the-perfect-apk-does-exist-2.png" alt=""></p>

<p>Of course, given PrismZero is a demo app, your mileage may vary and you should run the numbers on your own apps.</p>

<h4 id="whatisaprofileanyway">What is a profile anyway?</h4>

<p>If you're curious, you can use the <code>aprofutil</code> to perform basic inspection of a custom profile:</p>

<pre><code>aprofutil -as custom.aprof  
</code></pre>

<p>This will print all (-a) kinds of entries in the profile - modules, types and methods - to the console, then print the summary (-s) we saw earlier. There are a few other arguments that can be used to filter the output, but it's relatively unwieldy. </p>

<p>It's also possible to programmatically inspect the profile by referencing <code>Mono.Profiler.Log</code> (on Windows, it lives at <code>C:\Program Files (x86)\Microsoft Visual Studio\2019\Preview\MSBuild\Xamarin\Android\Mono.Profiler.Log.dll</code>) and loading the profile using the <code>ReadAllData</code> method on the <code>ProfileReader</code> class. In this format, we can see it's a set of Module (essentially, 'assemblies'), Type and Method records:</p>

<p><img src="http://ryandavis.io/content/images/2020/02/profile-overview.PNG" alt=""></p>
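<p>As a minimal sketch, loading and dumping a profile might look like this. Note that the namespace and the <code>Modules</code>/<code>Types</code>/<code>Methods</code> member names here are assumptions based on what the debugger shows above - treat it as a starting point rather than gospel:</p>

<pre><code>using System;
using System.IO;
using System.Linq;
using Mono.Profiler.Aot; // assumed namespace; ships in Mono.Profiler.Log.dll

class InspectProfile
{
    static void Main()
    {
        using (var stream = File.OpenRead("custom.aprof"))
        {
            var profile = new ProfileReader().ReadAllData(stream);

            Console.WriteLine($"Modules: {profile.Modules.Length}");
            Console.WriteLine($"Types: {profile.Types.Length}");
            Console.WriteLine($"Methods: {profile.Methods.Length}");

            // entries are stored in the order they were first accessed
            foreach (var method in profile.Methods.Take(10))
                Console.WriteLine($"{method.Type} :: {method.Name}");
        }
    }
}
</code></pre>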

<p>The modules, types and methods are listed in the order they are accessed, which can be interesting information in itself. Because there are so many entries, I filtered most out to give an idea of what each looks like:</p>

<p><img src="http://ryandavis.io/content/images/2020/02/profile-contents-1.png" alt=""></p>

<p>As we can see from this filtered set of types, during the early stages of startup there are a lot of constructors (<code>ctor</code>) being invoked, as well as the important Prism app <code>CreateContainerExtension</code> method. From the chart in <a href="https://ryandavis.io/adventures-in-low-overhead-dependency-injection-using-dryioczero/">my earlier post on DryIocZero</a> (shown below), we saw that the container setup time (whether using zero or otherwise) improves substantially on Android under AOT, so it is beneficial for us to include it in the AOT profile.</p>

<p><img src="https://ryandavis.io/content/images/2020/01/prism-init-chart.png" alt=""></p>

<p>Of course, being specific to our app, methods related to container initialisation would not be AOT'd using the default Startup Tracing profile in earlier versions of Visual Studio. It's the ability to produce a profile specific to your app's startup and libraries that makes this new iteration of the feature so compelling.</p>

<p>If you noticed earlier, we used a <code>ProfileReader</code> to load the AOT profile data. There's also a <code>ProfileWriter</code> class that can be used to write a new or changed profile back. The <code>ProfileData</code> class is immutable, meaning you should create a new instance with a different set of data, but we can also remember that in .NET there is no spoon, and just alter an existing instance directly using reflection. </p>

<p>Modifying a profile can open up more advanced scenarios. For example, we could merge the contents of two profiles (e.g. a 'logged out' startup path and a 'logged in' startup path), or we could combine information from a startup trace with other convention-based AOT decisions (e.g. AOT all startup methods + all ContentPage <code>InitializeComponent</code> methods), if we thought that were appropriate from a size/performance tradeoff perspective. Whilst regenerating a startup trace during CI is more challenging because it requires the app to be launched, regenerating convention-based profile contents for a Forms-based app at build time is quite reasonable. Maybe an experiment for a rainy day.</p>
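<p>For the curious, a profile merge along those lines might be sketched like this - with the caveat that the <code>ProfileData</code> constructor shape, the collection property names, and a <code>WriteAllData</code> counterpart on <code>ProfileWriter</code> are all assumptions on my part:</p>

<pre><code>using System.IO;
using System.Linq;
using Mono.Profiler.Aot; // assumed namespace; ships in Mono.Profiler.Log.dll

static class ProfileMerger
{
    public static void Merge(string firstPath, string secondPath, string outputPath)
    {
        ProfileData Read(string path)
        {
            using (var s = File.OpenRead(path))
                return new ProfileReader().ReadAllData(s);
        }

        var first = Read(firstPath);
        var second = Read(secondPath);

        // union the records from both profiles; a real implementation would
        // de-duplicate by identity rather than reference equality
        var merged = new ProfileData(
            first.Modules.Concat(second.Modules).Distinct().ToArray(),
            first.Types.Concat(second.Types).Distinct().ToArray(),
            first.Methods.Concat(second.Methods).Distinct().ToArray());

        using (var s = File.Create(outputPath))
            new ProfileWriter().WriteAllData(s, merged);
    }
}
</code></pre>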

<h4 id="shouldiusethis">Should I use this?</h4>

<p>Unless you are extremely binary-size sensitive, profiled AOT really does look like a compelling feature to start using. It's relatively easy to set up, and should net some good performance improvements if you aren't using full AOT now, or a nice size decrease without a major performance impact if you were already using full AOT. Keep in mind that as an app evolves, the methods called during startup may change too, so regenerating a profile from time to time is a good idea. </p>

<p>Happy profiling!</p>]]></description><link>http://ryandavis.io/using-custom-aot-profiles-with-xamarin-android/</link><guid isPermaLink="false">a067eb8f-aa1c-45e0-9fbe-a1790fe56401</guid><category><![CDATA[xamarin]]></category><category><![CDATA[android]]></category><category><![CDATA[code]]></category><category><![CDATA[performance]]></category><category><![CDATA[aot]]></category><category><![CDATA[jit]]></category><dc:creator><![CDATA[Ryan Davis]]></dc:creator><pubDate>Thu, 06 Feb 2020 01:23:00 GMT</pubDate></item><item><title><![CDATA[Adventures in Low Overhead Dependency Injection using DryIocZero]]></title><description><![CDATA[<p><a href="https://en.wikipedia.org/wiki/Dependency_injection">Dependency Injection (DI)</a> might be one of the more polarising topics in the Xamarin community. In general, you're either on board with it and use it religiously - or - you're wrong ⁽ʲᵘˢᵗ ᵏᶦᵈᵈᶦⁿᵍ⁾. The price for the privilege of a DI container is some overhead - both at startup, when service registrations are made, and during runtime, as dependencies are resolved. In this post, I'll look a little bit at using a library called <a href="http://ryandavis.io/">DryIocZero</a> - a lesser known sibling of the popular DryIoC framework - to help minimise the overhead associated with the use of DI. I'll look a bit at what it's like using it in a cross-platform Xamarin app, and also demonstrate a PoC <a href="https://github.com/PrismLibrary/Prism">Prism</a> integration, which - whilst a little rough to work with - does appear to show potential for nontrivial performance improvements. </p>

<p><em>(This is a pretty long and dense post, so make sure you're comfy!)</em></p>

<h3 id="doesdicontainerchoiceevenmatter">Does DI container choice even matter?</h3>

<p>It does, because of the overhead that's involved. How much, and when it is incurred (registration/resolve), does vary from container to container, as demonstrated in <a href="https://www.palmmedia.de/Blog/2011/8/30/ioc-container-benchmark-performance-comparison">these benchmarks</a>. Note that these are times for 500,000 resolutions, so we should take care when drawing conclusions for our purposes. </p>

<p>Working in mobile - a comparatively performance-sensitive environment - Xamarin developers hopefully gravitate towards containers with lower overheads. I like <a href="https://github.com/dadhi/DryIoc">DryIoc</a>, and admire the commitment the author <a href="https://twitter.com/dadhi">@dadhi</a> places on keeping it light and fast, without compromising on features. </p>

<p>Inspired by <a href="https://twitter.com/clintrocksmith">Clinton Rocksmith</a>'s recent post on <a href="https://www.rocksmithtech.com/improve-xamarin-performance-with-lazy/">using <code>Lazy&lt;T&gt;</code> to defer subgraph resolution</a>, I decided to cross off something that's been on the list for a couple of years now - investigating <a href="https://github.com/dadhi/DryIoc/blob/master/Extensions.md#dryioczero">DryIocZero</a>, a sibling to DryIoC that allows you to create a container that incurs zero registration overhead at startup. </p>

<h3 id="zerodelcaloriesdelsmallersmallestssssmallestsmaller">Zero <del>calories</del> ʳᵉᵍᶦˢᵗʳᵃᵗᶦᵒⁿ ᵒᵛᵉʳʰᵉᵃᵈ? <smaller><smallest>ɴᴏ ᴡᴀʏ ᴛʜᴀᴛ's ᴘᴏssɪʙʟᴇ</smallest></smaller></h3>

<p>I mean, that's what they said about Coke Zero too. <br>
<small>(mum tells me there's a catch)</small></p>

<p>It does sound too good to be true. So what's the deal? From the <a href="https://github.com/dadhi/DryIoc/blob/master/Extensions.md#dryioczero">DryIoc extensions page</a>, DryIocZero is <em>"[a] slim IoC Container based on service factory delegates generated at compile-time by DryIoc"</em>.  </p>

<p>In short, you use the DryIoc library at compile time to generate code that describes a complete, configured and validated implementation of your container. The result is a class that you can <code>new</code> up that is immediately ready to resolve the roots you specified - there's no need to burn time during startup adding registrations, and no need for DryIoc to process types and understand the dependency graph or resolution method for these requests. A simple example should make the idea clearer:</p>

<p><img src="http://ryandavis.io/content/images/2020/01/basic-graph.png" alt=""></p>

<p>On the left is a basic dependency graph with two roots <code>A</code> and <code>B</code> (viewmodels, maybe) and a small number of interface dependencies. Below that is some straightforward DryIocZero registration code - the same code you'd write when working with DryIoc normally. On the right is the relevant code that DryIocZero generates for that configuration, which I have tidied a little for readability. </p>

<p>Effectively, each resolution root gets a dedicated and self-contained implementation that is scope aware, generated based on the container configuration. When a root needs to be resolved, <code>ResolveGenerated</code> calls the appropriate method and you get your requested object. As you can see, resolution implementations involve no interface indirection, and no reflection - both of these are desirable properties in a Xamarin world. The  <code>Container</code> class (which includes more code in a static partial counterpart) can be instantiated directly with no need for further configuration. Sounds pretty good!</p>
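<p>In text form, the registration side of a graph like the one pictured is just standard DryIoc syntax, executed at compile time by the DryIocZero T4 template. The type names here are illustrative stand-ins for the hypothetical graph:</p>

<pre><code>using DryIoc;

public interface IDependency { }
public class Dependency : IDependency { }
public interface IamNested { }
public class Nested : IamNested { }

public class A { public A(IDependency d) { } }               // resolution root
public class B { public B(IDependency d, IamNested n) { } }  // resolution root

public static class Registrations
{
    public static void Configure(IRegistrator container)
    {
        container.Register&lt;A&gt;();
        container.Register&lt;B&gt;();
        container.Register&lt;IDependency, Dependency&gt;();
        container.Register&lt;IamNested, Nested&gt;(Reuse.Singleton);
    }
}
</code></pre>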

<h3 id="proofofconceptprismdryioczero">Proof of Concept - Prism + DryIocZero</h3>

<p>To put this approach through its paces with a more thorough test, I decided to try to configure a Prism app using DryIocZero. Though I haven't yet used Prism in a real app, I have the impression that it is both opinionated and configurable enough that integrating DryIocZero would take some thought, but be possible.</p>

<p>In the end, the Prism source, plus a good post from <a href="https://twitter.com/DanJSiegel">@dansiegel</a> (<a href="https://dansiegel.net/post/2018/10/29/using-unsupported-di-containers-with-prism">Using unsupported DI containers with Prism</a>) were enough to set me on the right track. It seems to have worked out, and you can check out the final result <a href="https://github.com/rdavisau/prism-zero">here</a>. I started out with the Prism template and each commit covers a specific aspect of the conversion, so you can use those to follow the process if interested. I'll call out a few bits here.</p>

<h5 id="creatingregistrations">Creating Registrations</h5>

<p>A large amount of the setup lands in the <code>Registrations.ttinclude</code> file. This is the T4 template file that needs to be modified to include the container configuration. In addition to convention-based registrations for services, viewmodels and pages, my implementation moves the registration of Prism core types out of startup to be performed at compile time. </p>

<p>Parts of the setup are moved to separate methods, so you can get a better idea of the high-level flow: <a href="https://github.com/rdavisau/prism-zero/blob/master/PrismZero/PrismZero/DryIocZero/Registrations.ttinclude#L35-L62" target="_blank"> <img src="http://ryandavis.io/content/images/2020/01/dry-reg-flow.png" alt="" title=""> </a> The first call worth noting is the <code>LoadAssemblyWithDependencies</code> call. It is a crude method I pulled together which attempts to read the <code>deps.json</code> of the app assembly in order to load other required assemblies into memory. Since DryIocZero creates a real DryIoc container from which to generate the compile-time version, any types you are registering need to be loaded into the running environment. You can reference assemblies manually in the T4 template, but I found the approach of dynamic and reflection-based type resolution to be more manageable, particularly since there's no C# intellisense anyway. This implementation makes use of the <code>Microsoft.Extensions.DependencyModel</code> package to load referenced NuGet package assemblies into memory.</p>

<p>The next point of note is the <code>RegisterPrismTypes</code> method. This takes the registrations found in the core <code>PrismApplicationBase</code> class, and instead wires them up as part of the compile-time container. This approach is somewhat brittle (would need to be kept in sync with any changes the Prism team makes), but is good to take off the startup path. </p>

<p><script src="https://gist.github.com/rdavisau/09d57eed64aa89ea6222637b84128bc9.js?file=RegisterPrismTypes.cs"></script>In addition to registering the types here, I intercept <code>RegisterRequiredTypes</code> in the app subclass, to prevent Prism from also trying to register the types. </p>

<p>The last interesting point in the registrations is probably the convention-based page/vm registration. This scans for pages and viewmodels and registers them in the container according to Prism requirements.<script src="https://gist.github.com/rdavisau/09d57eed64aa89ea6222637b84128bc9.js?file=RegisterViews.cs"></script>It's mostly straightforward, but the last line is worth a mention. Each page/viewmodel combination is added to a list and an additional modification I made to the generation template results in the creation of a <code>RegisterPageTypes()</code> method on the container. This method can be called at runtime to configure the Prism page registry and viewmodel locator based on registrations, again without requiring assembly scanning. <br>
<img src="http://ryandavis.io/content/images/2020/01/reg-page-types.png" alt=""></p>

<h5 id="integratingwithprism">Integrating with Prism</h5>

<p>The other major modification to fit DryIocZero is the creation of an <code>IContainerExtension</code>, an abstraction that Prism operates against to allow its use with different DI frameworks. Again, Dan's post <a href="https://dansiegel.net/post/2018/10/29/using-unsupported-di-containers-with-prism">here</a> provides a great overview. Since the container methods are very generic, most methods end up being simple argument pass-throughs to the <code>Container</code> implementation, to the point that we could probably just adopt <code>IContainerExtension</code> directly. In this case, I created a dedicated class. </p>
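<p>To give a feel for the shape of it, a pass-through extension might start out something like the below. Prism's <code>IContainerExtension</code> surface varies between versions and the generated container's method names differ from DryIoc proper, so the members here are indicative rather than exhaustive:</p>

<pre><code>using System;
using Prism.Ioc;

public class DryIocZeroContainerExtension : IContainerExtension&lt;Container&gt;
{
    public Container Instance { get; } = new Container();

    public object Resolve(Type type) =&gt; Instance.Resolve(type);

    public void RegisterInstance(Type type, object instance) =&gt;
        Instance.RegisterDelegate(type, _ =&gt; instance); // delegate/instance registrations are supported at runtime

    public void Register(Type from, Type to) =&gt;
        throw new NotSupportedException(
            "Type-to-type registrations are made at compile time in the T4 template.");

    public void FinalizeExtension() { /* nothing to finalize */ }

    // ...the remaining members are similar one-line pass-throughs
}
</code></pre>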

<p>In setting up the extension, I opted to completely remove the runtime DryIoc dependency, and disallow support for adding type-to-type registrations at runtime (delegates and instances are still supported). To my limited understanding, mutation of the container in Prism is primarily required to support the use of Prism modules - if you are not dynamically loading code or doing hot patching, you can live without supporting these kinds of registrations at runtime. </p>

<p>For the purposes of investigation, I did create another implementation of <code>IContainerExtension</code> that chained together a runtime DryIoc container and a compile-time DryIocZero container, which you can see <a href="https://gist.github.com/rdavisau/09d57eed64aa89ea6222637b84128bc9#file-dryioczerowithfallbackcontainerextension-cs">here</a>. Using this implementation, you can combine both compile time registrations with dynamic runtime registrations if necessary. By adding an <code>UnknownServiceResolver</code> rule to the runtime container, this even supports split resolutions. For example, if you dynamically load an admin module with types that depend on an existing <code>UserService</code> that was registered at compile time, DryIoc will resolve any new dependencies from the dynamic container and check the compile-time container for missing ones. I'm not sure of the overhead on this, but the technique is at least possible.</p>

<h3 id="whatstheimpact">What's the impact?</h3>

<p>I'm optimistic about the performance improvement that DryIoCZero can provide, especially on older devices. That said, I'm cautious about relying on the results of my benchmarks, because I'm not a pro benchmarker. I will say that these results seem consistent and are in line with the expectations that we have (i.e. that DryIocZero will be faster). These tests were run on a version of the PrismZero app with a few more services, pages, viewmodels and dependencies defined; not quite representative of a production-complexity app, but more complex than the template. </p>

<p>In order to appease my superstition about the possibility of second-order effects, rather than measure only the container creation time, I opted to measure something closer to Prism app startup time (excluding Xamarin.Forms initialisation). This means starting the timer right before the built-in template calls <code>LoadApplication</code>, which includes instantiation of the <code>App</code> subclass. The timer is stopped in <code>OnInitialisedAsync</code>, after Prism startup and container registrations, and prior to any navigation. This means that the times do not represent the exact figures for compile-time vs run-time container instantiation, but - as the only thing changed between runs - give a good feel for the difference when using each. <img src="http://ryandavis.io/content/images/2020/01/prism-init-chart.png" alt="" title=""> If the results are reliable, the performance impact can be substantial on older devices. My expectation is that a DryIocZero container will scale more favourably with an increased number and complexity of registrations, but I have not tested this. It would be awesome to see if someone else could reproduce a similar result independently, to give me confidence that this doesn't include anything that makes it incorrect. </p>

<h3 id="nofreelunches">No free lunches</h3>

<p>Amazing! We should all switch to DryIocZero immediately, right? Maybe, maybe not. In reality, there are caveats, considerations and compromises that using DryIocZero involves. Before negatives, I want to run over the good bits.</p>

<h5 id="pros">Pros:</h5>

<p><strong>Low overhead:</strong><br>Of course, this is the primary benefit. Given the results we've seen, compile time generation of the container can take a bunch of the startup impact of DI out of the picture. </p>

<p>Resolution performance appears to depend on whether your code is JIT'd or AOT'd. When JIT'ing, DryIoc resolves slightly more quickly, whilst when AOT'd, DryIocZero comes out on top in most cases. I reproduced this behaviour both on device and on my workstation using the IoC benchmarking project, so I am confident in this assessment.  </p>

<p><img src="http://ryandavis.io/content/images/2020/01/benchmarks.png" alt=""></p>

<p>That said, given these times are in milliseconds and are measuring 500,000 resolutions, the difference between resolution times for either is probably marginal. On the slowest device when JIT'ing I saw resolves take 2-3 ms longer using DryIocZero. For saving 500ms+ at startup, I consider that a fair trade.  </p>

<p><strong>No penalty for convention-based registrations:</strong><br>This is kind of part of the previous point, but worth an additional callout. For convenience, it's common to prefer to create registrations based on some kind of convention, or based on the presence of marker types. You can see this technique in use in the Prism demo app. </p>

<p>Writing code to register types based on conventions like this is nice, because we don't have to go back to the container and add new registrations every time we create new services etc. However, doing so requires reflecting over assemblies to find the matching types, which adds to startup time. With DryIocZero, this work gets done at compile time, so we get the convenience of convention-based registration without additional cost.</p>
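<p>As a sketch of what such a convention pass might look like inside the template, here's a hypothetical one that registers every concrete type ending in "Service" against its matching I-prefixed interface (the naming convention and names are illustrative, not from the demo app):</p>

<pre><code>using System.Linq;
using System.Reflection;
using DryIoc;

public static class ConventionRegistrations
{
    public static void RegisterServices(IRegistrator container, Assembly assembly)
    {
        // find concrete types matching the convention, e.g. UserService
        var implementations = assembly.GetTypes()
            .Where(t =&gt; t.IsClass &amp;&amp; !t.IsAbstract &amp;&amp; t.Name.EndsWith("Service"));

        foreach (var implementation in implementations)
        {
            // pair with the matching interface, e.g. IUserService
            var service = implementation.GetInterfaces()
                .FirstOrDefault(i =&gt; i.Name == "I" + implementation.Name);

            if (service != null)
                container.Register(service, implementation, Reuse.Singleton);
        }
    }
}
</code></pre>

<p>With DryIocZero, the reflection above runs once at compile time inside the T4 template, rather than on every app launch.</p>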

<p><strong>Less 'magic':</strong><br>Because DryIocZero generates real code into your project, there's less mystery involved in injection. As the developer, you can set breakpoints inside resolution methods if you want to step through the process. The compiler gets to work on the resolution code as well, so it is eligible for AOT and optimisation. The linker is also better informed because of the lack of indirection. For example, it can see exactly which constructors are being invoked on types, and won't link them out. In the case of convention based registrations, where eligible types would normally only be detected at runtime, the linker again gets the benefit of seeing the types referenced in the container at compile time. </p>

<p><strong>Extensible:</strong><br> Probably both a pro and a con of DryIocZero is its delivery method - a set of classes you add to your project, including a T4 template. Since the template itself can be modified you can add your own code generation functionality. An example of this is the automated generation of <code>RegisterPageTypes()</code> in the demo app.  </p>

<h5 id="cons">Cons:</h5>

<p>Unfortunately, it's not all sunshine and rainbows 😭. There's no denying that there's more effort involved in setting up and maintaining injection via DryIocZero, for a variety of reasons. Installation is more involved than your standard NuGet install (but the instructions are easy to follow). Some other considerations have reasonable solutions or workarounds that I worked out quickly at the start; others could pose ongoing development-time overhead. </p>

<p>As I look to try using DryIocZero on a new project, I'll find out whether it's truly viable. I think that I probably have a higher 'inconvenience' threshold than many <small><i>(but hey because of that I've been hot reloading code forever thanks Frank)</i></small>. For now, here are some pain points that I found.</p>

<p><strong>No access to runtime state in registrations:</strong><br>
A well documented limitation of Zero is that you can't preconfigure implementations that depend on runtime state. Since the container is effectively created ahead of time, this makes sense, and rules out the use of methods like <code>RegisterInstance(...)</code> and <code>RegisterDelegate(c =&gt; ...)</code>. </p>

<p>So what if you have a dependency with an implementation that can't be known until runtime? In situations like that, you can use the <code>RegisterPlaceholder</code> method to tell DryIocZero that you will provide an implementation at runtime. That allows DryIocZero to construct resolution implementations with placeholders (appropriately enough) for the missing types, allowing the container to compile. As an example, here is the code generated in a resolution if we specify that the <code>IamNested</code> dependency from the beginning example will be provided at runtime:</p>

<p><img src="http://ryandavis.io/content/images/2020/01/nested.png" alt="" title=""> <small><center><i>
It looks a bit scary, but it's mostly a <code>Resolve</code> call with all the optional arguments specified. <br>
Can't say I understand the 'preResolveParent' argument though 🤔</i></center></small></p>
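<p>Usage-wise, the placeholder dance looks something like this. <code>RegisterPlaceholder</code> is the documented half; the runtime method name and <code>RuntimeNested</code> are my own illustrative assumptions, based on the generated container supporting instance and delegate registrations:</p>

<pre><code>// compile time, in the T4 template: promise that an implementation
// of IamNested will be provided later
container.RegisterPlaceholder&lt;IamNested&gt;();

// run time, against the generated container: fill the placeholder in
// with state only known at runtime
var container = new Container();
container.RegisterDelegate&lt;IamNested&gt;(r =&gt; new RuntimeNested(currentUser));
var b = container.Resolve&lt;B&gt;(); // the placeholder is now satisfied
</code></pre>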

<p>If we don't register a placeholder at all, DryIocZero will not generate any roots that involve graphs with that dependency - how could they compile? In the case of our example, this would remove <code>B</code> from the generated code - and write an error at the top of the container like the below:</p>

<p><img src="http://ryandavis.io/content/images/2020/01/nonested.png" alt=""></p>

<p>Although it's good that the container is validated, I actually tend to think that a missing dependency should cause the entire container to fail generation. In the above case, since the container code still compiles, you could end up hitting the missing dependency at run time.</p>

<p><strong>Semi-mutable implications</strong>:<br>
I'm coining the term 'semi-mutable' to describe a DryIocZero container's mutability. A container generated by Zero is actually independent and self-contained - although the generation process uses DryIoc, the output is lighter-weight and has no such dependency. There are various implications to this, but one to keep in mind is that it is not an <code>IContainer</code> so can't be used in place of one, and does not support everything an <code>IContainer</code> supports. Looking at <a href="https://github.com/dadhi/DryIoc/issues/101">this issue</a>, this is something that might change in the future, but it is this way for now. </p>

<p>We've seen that a DryIocZero container supports some mutation - specifically, the ability to replace placeholders, or add new registrations with instance or delegate implementations. However, it doesn't support the 'standard' registration syntax:</p>

<pre><code>// this overload does not exist on a zero-generated container
container.Register&lt;IThing, Thing&gt;();  
</code></pre>

<p>If you think about it, this is reasonable. We're telling the container that it needs to be able to later resolve <code>IThing</code> to a <code>Thing</code>, which may have arbitrary dependencies and its own resolution strategy - but this isn't a <code>DryIoc.Container</code> that knows how to work out things like that. This is a lightweight, purpose-built container class designed to resolve exactly what we told it to be able to resolve at build time. </p>

<p>For most purposes, this probably isn't a big deal. However, in an interpreter powered, hot-patch-friendly world, you might need a fall back. One option is to provide your own delegate that reflects over <code>Thing</code> and attempts to resolve the dependencies from the container. It won't be blazing fast, but given that you've already accepted the perf hit of your new dynamic code being interpreted (on iOS, at least), this might be acceptable. Another option is to chain the DryIocZero container to a fully fledged DryIoC container, as I mentioned when talking about the Prism demo.</p>
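<p>The reflection-based fallback described above could be sketched like this - the delegate registration method on the generated container, and the assumption that its resolver argument exposes a <code>Resolve(Type)</code>, are both mine:</p>

<pre><code>using System;
using System.Linq;

public static class FallbackRegistrations
{
    public static void RegisterThing(Container container)
    {
        container.RegisterDelegate(typeof(IThing), resolver =&gt;
        {
            // resolve Thing's constructor arguments from the container,
            // then invoke the constructor via reflection - not fast, but workable
            var ctor = typeof(Thing).GetConstructors().Single();
            var args = ctor.GetParameters()
                .Select(p =&gt; resolver.Resolve(p.ParameterType))
                .ToArray();

            return ctor.Invoke(args);
        });
    }
}
</code></pre>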

<p><strong>Configuring the configuring of the container:</strong><br> Again, a strength and weakness of DryIocZero is its implementation via T4 templating. It's a clever and workable solution, but T4 templates are a bit awkward to write code in. For that reason, I'm tending towards favouring convention-based registrations - the idea being that if you get the conventions set up properly at the start, you'll rarely need to revisit it.</p>
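<p>As a concrete example of what a convention might look like, here's a sketch that scans an assembly for classes named <code>*ViewModel</code>. The names and the <code>register</code> delegate are illustrative and independent of the actual DryIoc registration API:</p>

```csharp
using System;
using System.Linq;
using System.Reflection;

// Hypothetical convention: any concrete class whose name ends in "ViewModel"
// gets registered. The registration itself is represented by a delegate so
// this logic stays independent of the container API.
public static class ConventionScanner
{
    public static int RegisterViewModels(Assembly assembly, Action<Type> register)
    {
        var viewModelTypes = assembly.GetTypes()
            .Where(t => t.IsClass && !t.IsAbstract && t.Name.EndsWith("ViewModel"))
            .ToList();

        foreach (var type in viewModelTypes)
            register(type);

        return viewModelTypes.Count;
    }
}

// example types the convention would pick up
public class HomeViewModel { }
public class SettingsViewModel { }
```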

<p>The obvious challenge is the lack of C# autocompletion. Since we're typing into a T4 template that's to be expected, but it means working out overloads and namespaces can be harder. </p>

<p>The bigger challenge is making sure that the environment that's executing the template has access to all the required types to be registered. You can add references to the T4 template to have assemblies loaded, and the first one you'll need to add is a reference to your own app. That's easy enough thanks to a few available substitutions. I don't love it but I can live with it:</p>

<p><img src="http://ryandavis.io/content/images/2020/01/dep.png" alt=""></p>

<p>With that reference added (and your project built), you can now refer to your types. However, if you need to register something from another assembly, <em>or a type of yours that derives from a type defined in another assembly, like a Xamarin.Forms <code>ContentPage</code> subclass</em>, you'll need to reference that assembly too. One way to handle this is to keep adding references to the T4 template as you find that you're missing them, but that isn't much fun. An alternative approach is to dynamically load the dependencies of your app before configuring the container, and use reflection-based techniques to reference types. It sounds bad, but since we already aren't getting autocomplete and we're favouring convention-based registration I think it's the better approach. This is the technique I used in the Prism proof of concept.</p>

<p>🐔 <strong>Which came first 🥚:</strong><br>As shown earlier, the T4 template used by DryIocZero needs to use the types from your project to know how to generate the container implementation. That means that your project already needs to be built to generate the container. Of course, once the container is generated, your project needs to be rebuilt to make use of the generated container implementation. Aaahhh. </p>

<p>In reality, the way DryIocZero is implemented means that it avoids a true chicken-and-egg problem, because the generated code goes into a partial class. Since the other part of the partial class is always in the project, there's no problem in having your project reference the container even when the resolution implementations haven't been generated yet. </p>

<p>However, changes to your project can cause the container to fall out of sync. For example, adding a new dependency to an existing registered type will break the generated container, as it will be missing the new constructor argument. To fix this, we just need to regenerate the container. Unfortunately, to regenerate the container we need to build the project with the updated type dependencies, and we can't build the project because the existing generated container is missing the construc-- Aaaaaaaaaaaahhh. </p>

<p>The answer is to just delete the generated container before trying to build, but this can be a bit arduous and awkward. I feel like with some clever project structuring this could be improved, which might work in with the next point.</p>

<p><strong>Platform dependencies:</strong><br>This is probably the major outstanding question I have around putting together a reasonable implementation of DryIocZero in a Xamarin app. Typically, cross-platform apps define their DI container in the shared project, with a mechanism that allows the platform heads to perform any platform-specific registrations for implementations that exist in the heads only. This can be as simple as providing an <code>Action&lt;Container&gt;</code> to the shared project during initialisation, or using something like Prism's <code>PlatformInitializer</code> pattern. How to handle this using DryIocZero? </p>
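<p>The <code>Action&lt;Container&gt;</code> handoff itself is simple to sketch with a stand-in registry (<code>SimpleRegistry</code>, <code>SharedApp</code> and <code>IDeviceInfo</code> are illustrative names, not DryIoc or Xamarin types):</p>

```csharp
using System;
using System.Collections.Generic;

// Illustrative stand-in for the real container, to show the shape of the
// shared-project/platform-head handoff.
public class SimpleRegistry
{
    private readonly Dictionary<Type, Func<object>> _factories = new Dictionary<Type, Func<object>>();
    public void Register<T>(Func<T> factory) => _factories[typeof(T)] = () => factory();
    public T Resolve<T>() => (T)_factories[typeof(T)]();
}

// an abstraction defined in the shared project...
public interface IDeviceInfo { string Platform { get; } }

// ...with an implementation that would live in a platform head
public class IosDeviceInfo : IDeviceInfo { public string Platform => "iOS"; }

public static class SharedApp
{
    // the shared project exposes an init hook; each platform head passes its
    // platform-specific registrations in during startup
    public static SimpleRegistry Init(Action<SimpleRegistry> platformRegistrations = null)
    {
        var registry = new SimpleRegistry();
        // ... shared registrations would go here ...
        platformRegistrations?.Invoke(registry);
        return registry;
    }
}
```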

<p>The easiest option is to just define those dependencies as placeholders and register them at runtime. This again needs to work in with the constraints around runtime registration - constructed instances or delegates only, which implies the need to hand-resolve any dependencies. Looking over existing projects, this is probably an acceptable approach - between Xamarin.Essentials, other plugins and netstandard, many platform dependencies are already abstracted. </p>

<p>The better option might be to move generation of the container (now, container<em><strong>s</strong></em>) to the platform heads. A shared project could host the DryIocZero classes, and the template could <code>&lt;#@ include #&gt;</code> a file that exists in each head with the implementation for <code>RegisterPlatformDependencies()</code>. It might be worth investigating. </p>

<h3 id="dotheprosoutweighthecons">Do the Pros outweigh the Cons?</h3>

<p>Whew, that was a lot of words. </p>

<p>I do see several challenges to using DryIocZero in anger in its current form, but I think the answer comes down to the project, team and tolerance for straying from the beaten path. There are other aspects I can think of but haven't investigated yet, such as working this into a build pipeline and how it might work in a team and with PRs. My spidey sense says that the container should probably not be checked in, and should be generated at build time. </p>

<p>Personally, I'm game to give it a try, and will be using it on my next project. If it's not working out, falling back to DryIoc proper involves little more than copying the configuration code out of the T4 template and into the app, making it a relatively low-risk opportunity. If people are interested, I'll keep them up to date with how it goes!</p>]]></description><link>http://ryandavis.io/adventures-in-low-overhead-dependency-injection-using-dryioczero/</link><guid isPermaLink="false">aa0bdf26-e092-499a-a97d-4688ef5a7d46</guid><category><![CDATA[xamarin]]></category><category><![CDATA[code]]></category><category><![CDATA[dependency-injection]]></category><category><![CDATA[performance]]></category><category><![CDATA[dryioc]]></category><category><![CDATA[dryioczero]]></category><dc:creator><![CDATA[Ryan Davis]]></dc:creator><pubDate>Thu, 09 Jan 2020 10:35:00 GMT</pubDate></item><item><title><![CDATA[(More of) What's new in iOS13]]></title><description><![CDATA[<p>At the <a href="https://www.meetup.com/Melbourne-Xamarin-Meetup/">Melbourne Xamarin Meetup</a> <a href="https://www.meetup.com/en-AU/Melbourne-Xamarin-Meetup/events/266041977/">November 2019 Meetup</a>, I gave a second run of my <em>"<a href="https://ryandavis.io/some-of-whats-new-in-ios13/">(Some of) What's new in iOS13</a>"</em> talk, which (appropriately) covers new features and frameworks in the latest version of iOS. Since the talk included a number of new demos, this one ended up with the title <em>"(More of) What's new in iOS13"</em>. You can see more about the original talk in my <a href="https://ryandavis.io/some-of-whats-new-in-ios13">earlier post</a>, which covered <strong>Dark Mode</strong>, <strong>PencilKit</strong>, <strong>ARKit</strong> and <strong>CoreML</strong>. In addition to those, the 'More of' edition included new segments on <strong>Multi-Window apps</strong>, <strong>Sign-in with Apple</strong>, and <strong>CoreNFC</strong>.</p>

<p>The meetup itself was streamed on Twitch, <!-- and the recording is also now published on YouTube, --> so you can also watch the full thing back at your leisure (~1h 15m):</p>

<p><center></center></p>

<iframe width="952" height="536" src="https://www.youtube.com/embed/y0bsdeHEHhI" frameborder="0" allow="accelerometer; autoplay; encrypted-media; gyroscope; picture-in-picture" allowfullscreen></iframe>

 <!--
 <iframe src="https://player.twitch.tv/?autoplay=false&video=v510820787" frameborder="0" allowfullscreen="true" scrolling="no" height="378" width="620"></iframe><a href="https://www.twitch.tv/videos/510820787?tt_content=text_link&tt_medium=vod_embed" style="padding:2px 0px 4px; display:block; width:345px; font-weight:normal; font-size:10px; text-decoration:underline;">Watch Melbourne Xamarin Meetup - Ryan Davis - What's new in iOS 13 from melbournexamarinmeetup on www.twitch.tv</a>
 -->

<p></p>

<p>Slides are available at the end of the post, and a little detail on each of the new areas from this talk is below.</p>

<h3 id="multiwindow">Multi Window</h3>

<p><small><strong>WWDC Reference:</strong> <a href="https://developer.apple.com/videos/play/wwdc2019/212/">Introducing Multiple Windows on iPad <br>
</a></small></p>

<p>On iPad, iOS13 (strictly, 'iPadOS') adds the ability for individual apps to work with more than one window.</p>

<p><center>  </center></p>

<video controls autoplay loop width="100%">  
    <source src="https://ryandavis.io/content/images/2019/12/many-windows.mp4" type="video/mp4">
</video>  

<p></p>

<p>There are various ways that a new window can be created - from the dock, from a shortcut item, via a drag and drop operation, or programmatically. </p>

<p><center>  </center></p>

<video controls autoplay loop width="31%">  
    <source src="https://ryandavis.io/content/images/2019/12/app-launcher.mp4" type="video/mp4">
</video> 

<video controls autoplay loop width="31%">  
    <source src="https://ryandavis.io/content/images/2019/12/drag-and-drop.mp4" type="video/mp4">
</video>

<video controls autoplay loop width="31%">  
    <source src="https://ryandavis.io/content/images/2019/12/shortcut-item.mp4" type="video/mp4">
</video>  

<p></p>

<p>The positioning of an app's visible windows is fixed to a few variations (split pane or floating - same as with multi-app multitasking). However, the total number of windows an app can have (that is, including backgrounded windows) is effectively unlimited. </p>

<p>To model multiple window architectures, iOS13 introduces a few new supporting classes; importantly the <code>UI(Window)Scene</code> and <code>UISceneDelegate</code> classes - one of each will be instantiated for every new window created by the application. The <code>UISceneDelegate</code>, among other things, handles the 'UI lifecycle' of the window, so if you opt in to supporting multiple windows, iOS will stop calling the <code>UIApplicationDelegate</code> UI lifecycle methods and instead call them on the <code>SceneDelegate</code>(s) of the windows involved in the changes. Whilst multiple windows can offer a lot of power and flexibility, it does come at the cost of additional complexity. For comparison, I diagrammed an indicative object instance graph in single window and multi window scenarios. </p>

<p><img src="http://ryandavis.io/content/images/2019/12/no-problems.png" alt="a diagram that shows the relatively simple UIApplication/Delegate/Window instance hierarchy in a traditional single window app - one to one to one - three objects involved in the display of one window"></p>

<p><img src="http://ryandavis.io/content/images/2019/12/mo-problems.png" alt="a diagram that shows the comparatively more complicated UIApplication/Delegate/UIScene/UISceneDelegate/Window instance hierarchy in a multi window app, with 17 objects involved in the display of three windows"></p>

<p>Although it's true that the second example contains three windows (so we should expect it to be more complicated), everything on the left half of the diagram is required even when a single window is used. Supporting multiple windows also comes with other considerations such as:</p>

<ul>
<li>Accounting for iPhones of all versions, and iPads running iOS12 and lower (no multi-window support)</li>
<li>Removing any baked in assumptions about the existence of just one window </li>
<li>Separating process-level concerns and initialisation from interface level initialisation</li>
<li>Cross-platform experience</li>
</ul>

<p>Given the above, I'd personally be looking at multi-window only if I had a really strong case for it in a mobile/tablet app. On the other hand, for an iPad-focussed app - given Marzipan - it might make sense to start adopting this now. </p>
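<p>For completeness, if you do opt in to scenes, UI setup moves into a scene delegate along these lines (a sketch only - the corresponding Info.plist scene manifest entries are also required, and omitted here):</p>

```csharp
using Foundation;
using UIKit;

// Sketch of a minimal scene delegate: once an app opts in to multiple
// windows, iOS calls UI lifecycle methods here rather than on the
// UIApplicationDelegate.
[Register("SceneDelegate")]
public class SceneDelegate : UIWindowSceneDelegate
{
    public override UIWindow Window { get; set; }

    public override void WillConnect(UIScene scene, UISceneSession session, UISceneConnectionOptions connectionOptions)
    {
        if (!(scene is UIWindowScene windowScene))
            return;

        // each window is created against its own scene
        Window = new UIWindow(windowScene)
        {
            RootViewController = new UIViewController()
        };
        Window.MakeKeyAndVisible();
    }
}
```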

<h3 id="signinwithapple">Sign In With Apple</h3>

<p><small><strong>WWDC Reference:</strong> <a href="https://developer.apple.com/videos/play/wwdc2019/706/">Introducing Sign In with Apple <br>
</a></small></p>

<p>Sign In With Apple (illegally abbreviated to SIWA here and in the slides) is a new OIDC compliant (with some <a href="https://bitbucket.org/openid/connect/src/default/How-Sign-in-with-Apple-differs-from-OpenID-Connect.md]">'peculiarities'</a>) auth mechanism that ships with iOS13. It provides a streamlined auth experience for both users and developers, by leveraging features circumstances guaranteed by the use of a signed-in Apple device. </p>

<p>From the user's perspective, SIWA offers a familiar, convenient and privacy-friendly signup/auth flow. No passwords are involved, and the user can choose to share or hide personal details such as their name and email address, without compromising the functionality of the application. </p>

<p><center>  </center></p>

<video controls autoplay loop width="100%">  
    <source src="https://ryandavis.io/content/images/2019/12/sign-in-with-apple.mp4" type="video/mp4">
</video>  

<p></p>

<p>When the user selects the 'Hide my email address' feature, Apple provides the developer with a random-looking email address, and relays any emails sent to that address to the user's actual address. The user can disable this relay from Settings at any time.</p>

<p><img src="http://ryandavis.io/content/images/2019/12/settings.jpg" alt="image of the demo app's entry under iCloud Passwords and Security in the iOS Settings app. It shows the relay email address that Apple provided to the developer and includes options that allow the user to either disable email relay, or completely stop using their Apple ID with the app."></p>

<p>From the developer's perspective, SIWA also offers some benefits. Apple provides buttons and sign-in UI for the process that can be invoked with a small amount of code. SIWA also provides a stable user identifier across devices and a reliable assessment of whether it thinks the user is real (as opposed to being a bot). </p>

<p><img src="http://ryandavis.io/content/images/2019/12/siwa-buttons.png" alt="images of two styles of 'Sign in with Apple' buttons, one with a black filled background and white text, the other with a white fill and black text."></p>

<p>According to its developer policy, Apple will require apps that rely solely on social login to include SIWA as a sign up/auth option. I read somewhere that it will be enforced for existing apps from March 2020 but don't quote me on that.</p>

<h3 id="corenfc">CoreNFC</h3>

<p><small><strong>WWDC Reference:</strong> <a href="https://developer.apple.com/videos/play/wwdc2019/715/">CoreNFC Enhancements</a></small></p>

<p>CoreNFC allows developers to add Near-Field Communication (NFC) capabilities to their apps when running on devices with the appropriate hardware (all iPhones since the iPhone 7, but not iPads). First introduced in iOS11, CoreNFC has traditionally offered a relatively restricted set of features to developers (when compared to that of Android), essentially limiting interactions to NDEF tag reading. With iOS13, CoreNFC supports new features that significantly increase the applications of NFC on iOS devices. </p>

<p><strong>NDEF tag writing:</strong> <br>
It's now possible to write NDEF messages to tags using the <code>WriteNdef</code> method on <code>INFCNdefTag</code>. CoreNFC provides convenience methods for constructing string and URI-based NDEF payloads via the <code>NFCNdefPayload.CreateWellKnownTypePayload</code> overloads. The addition of NDEF writing helps bring iOS NFC capabilities closer to parity with that of Android.</p>

<p><strong>Native tag protocol implementations:</strong> <br>
In iOS13, CoreNFC also allows developers to interact with several kinds of NFC tags via their native protocols. This includes ISO7816, ISO15693, FeliCa, and MIFARE tags. With the appropriate entitlement, and use of the new <code>NFCTagReaderSession</code> and <code>NFCTagReaderSessionDelegate</code> classes, you can now detect these tags and send commands to them using dedicated interfaces:</p>

<p><img src="http://ryandavis.io/content/images/2019/12/native-tag-kinds.png" alt="an image of VS4Mac autocomplete on an `INFCTag` that shows the possible completions for `GetNFC` - `GetNFCFeliCaTag`, `GetNFCIso15693Tag`, `GetNFCIso7816Tag`, GetNFCMiFareTag`"></p>

<p>For demonstration purposes only and not because I am a nerd, I was most interested in trying out the MIFARE tag support, knowing that tags in Nintendo Amiibo use that technology. After consulting the MIFARE datasheet and the reverse-engineered Amiibo data structure, I was able to successfully detect and read game/character identifiers from an Amiibo using MIFARE commands, which could then be sent to the Amiibo API to perform an Amiibo lookup. Cool! </p>

<p><img src="http://ryandavis.io/content/images/2019/12/proto.png" alt="image showing invocation of the `SendMifareCommand` next to a snapshot of the MIFARE datasheet overview of the same command, with lines associating the parameters of the method invocation to the parameters described on the datasheet. Next to that, a snapshot of the Amiibo Page layout table from 3dsbrew.org">
<img src="http://ryandavis.io/content/images/2019/12/amiibo.jpeg" alt="image showing the console output from the demo app when an Amiibo is successfully scanned. It includes a hex dump, key game/character identifiers and the response from amiiboapi. The scanned amiibo (Link from Majora's mask) is pictured next to it."></p>

<p>(In practice, I found the scanning to be a little bit unreliable, but I could have been holding it wrong)</p>

<p>Not satisfied with just identifying an amiibo, I also investigated the legally, morally, and ethically murky world of Amiibo cloning. In the image of the MIFARE datasheet above, the <code>FAST_READ</code> command is shown, but there is also a <code>WRITE</code> command that allows data to be written to a specified page. With the right kind of blank tag (NTAG215), and a mechanism for decrypting and reencrypting an Amiibo dump, it's possible to use the <code>WRITE</code> command to clone an Amiibo in a manner that a Nintendo Switch considers valid.</p>

<p><center>  </center></p>

<video controls loop width="55%">  
    <source src="https://ryandavis.io/content/images/2019/12/by.mp4" type="video/mp4">
</video>  

<p></p>

<p>Cloning Amiibo raises so many ethical and moral questions, but more importantly, it relies on secret Nintendo key material that shouldn't be checked into the repo. Given that, I removed the cloning implementation from the demo app before checking it in, but left the high level flow in the comments for the inspired reader. It isn't too difficult (standing on the shoulders of giants), but it is a little involved. </p>

<h2 id="demoapp">Demo App</h2>

<p>The "Hello iOS13" demo app has been updated to include the new demos on Github: <a href="https://github.com/rdavisau/hello-ios13">rdavisau/hello-ios13</a>.</p>

<p><img src="https://ryandavis.io/content/images/2019/08/menu-mini-mini.jpeg" alt="menu screen"></p>

<h2 id="slides">Slides</h2>

<p>Links for the slides are below. </p>

<p>Slides (56): <a href="https://ryandavis.io/content/images/2019/11/%28More_of%29_What_s_new_in_iOS13_-_Ryan_Davis_20191120.pdf">PDF</a></p>

<table>  
<tr>  
<td width="373">  
<img src="http://ryandavis.io/content/images/2019/11/whatsnew/Slide4.PNG" alt="">
</td>  
<td>  
<img src="http://ryandavis.io/content/images/2019/11/whatsnew/Slide10.PNG" alt="">
</td>  
</tr>

<tr><td width="373">  
<img src="http://ryandavis.io/content/images/2019/08/whatsnew/Slide8.PNG" alt="">
</td>

<td width="373">

<img src="http://ryandavis.io/content/images/2019/11/whatsnew/Slide24.PNG" alt="">
</td>  
</tr>

<tr>  
<td width="373">  
<img src="http://ryandavis.io/content/images/2019/11/whatsnew/Slide30.PNG" alt="">
</td>  
<td width="373">  
<img src="http://ryandavis.io/content/images/2019/11/whatsnew/Slide33.PNG" alt="">
</td>  
</tr>

<tr><td width="373">  
<img src="http://ryandavis.io/content/images/2019/11/whatsnew/Slide39.PNG" alt="">
</td>

<td width="373">

<img src="http://ryandavis.io/content/images/2019/11/whatsnew/Slide51.PNG" alt="">
</td>  
</tr>

</table>]]></description><link>http://ryandavis.io/more-of-whats-new-in-ios13/</link><guid isPermaLink="false">1d1fe344-2c2b-4f3e-9757-86f04a74a370</guid><category><![CDATA[xamarin]]></category><category><![CDATA[almost-famous]]></category><category><![CDATA[xamarin.ios]]></category><category><![CDATA[ios13]]></category><category><![CDATA[amiibo]]></category><dc:creator><![CDATA[Ryan Davis]]></dc:creator><pubDate>Tue, 03 Dec 2019 22:23:00 GMT</pubDate></item><item><title><![CDATA[Custom Machine Learning Made Easy With ML.NET]]></title><description><![CDATA[<p>A few months back, I wrote a post for Progress/Telerik devblogs on building offline machine learning models for ML.NET's quickly and easily using the Model Builder GUI. The piece briefly introduces ML.NET and then follows with a walkthrough of training an interaction prediction engine for New York taxi fares. That post was finally published this month, so you can read all about it over at Telerik blogs:</p>

<p><center> <br>
<a href="https://www.telerik.com/blogs/custom-machine-learning-with-mlnet"> <br>
<img src="http://ryandavis.io/content/images/2019/12/mlnet.png" alt="">
</a> <br>
</center></p>

<p>Although I mostly work on Xamarin things, ML is an area of interest and I'm glad to have a good .NET-friendly framework to work with in ML.NET. Lately I have been playing with TensorFlow and ML.NET for realtime processing of video game feeds, to do things like scene detection and digit classification (score, health).</p>

<p><blockquote class="twitter-tweet"><p lang="en" dir="ltr">My (ridiculous) <a href="https://twitter.com/linqpad?ref_src=twsrc%5Etfw">@linqpad</a>-based Smash Brothers Classic Mode tracker is coming along, now with damage recognition thanks to a custom digit classifier + <a href="https://twitter.com/migueldeicaza?ref_src=twsrc%5Etfw">@migueldeicaza</a>&#39;s TensorFlowSharp binding.. very easy to consume a Tensorflow model from the .NET world 🤓🎮 <a href="https://t.co/grDnFnqemA">pic.twitter.com/grDnFnqemA</a></p>&mdash; Ryan Davis (@rdavis_au) <a href="https://twitter.com/rdavis_au/status/1109259433777852416?ref_src=twsrc%5Etfw">March 23, 2019</a></blockquote> <script async src="https://platform.twitter.com/widgets.js" charset="utf-8"></script></p>]]></description><link>http://ryandavis.io/custom-machine-learning-made-easy-with-ml-net/</link><guid isPermaLink="false">1af8d1c8-546d-4998-ad39-fc70469898f7</guid><category><![CDATA[almost-famous]]></category><category><![CDATA[ml.net]]></category><category><![CDATA[machine-learning]]></category><category><![CDATA[.net]]></category><dc:creator><![CDATA[Ryan Davis]]></dc:creator><pubDate>Sat, 23 Nov 2019 02:23:00 GMT</pubDate></item><item><title><![CDATA[Using Diffable Data Sources in Xamarin.iOS]]></title><description><![CDATA[<p>iOS13 includes a large number of exciting new features, some of which I outlined in a <a href="https://ryandavis.io/some-of-whats-new-in-ios13">previous post</a>. One addition in the release that has received less attention is the new option for <code>UITableView</code> and <code>UICollectionView</code> data source configuration - the 'diffable data source'. The aim of diffable data sources is to greatly simplify the interactions between a dataset and <code>UITableView</code>/<code>UICollectionView</code> when changes occur, by allowing UIKit itself to manage the diffing, reload and animation of sections and cells. 
As a more opinionated API, this feature bakes in some assumptions that are undesirable from a Xamarin.iOS perspective. However, with the use of a few helper methods and classes - it's possible to make use of it from Xamarin.iOS without too much trouble. </p>

<h4 id="doineedthisinmylife">Do I need this in my life?</h4>

<p>The answer for many people is probably <strong>'no'</strong>. If you're a Xamarin.Forms user, you don't typically deal directly with <code>UITableView</code> and <code>UICollectionView</code> APIs, so this probably won't be of interest. Even if you work 'natively' with Xamarin.iOS, application of the MVVM pattern typically results in the delegation of datasource/control interaction to your MVVM framework of choice via bindings or dedicated helper classes. For example, <a href="https://reactiveui.net/">ReactiveUI</a> includes a <code>ReactiveTableViewSource</code> class that allows you to work with a <code>UITableView</code> in a more MVVM-friendly way, relieving you of the need to deal directly with nasty <code>indexPaths</code> and APIs like <code>BeginUpdates</code>. In this case, again, the addition of diffable data sources probably won't interest you.</p>

<p>If you do build iOS-only applications using Xamarin.iOS, like I do from time-to-time, diffable data sources do present an interesting option for table/collection view configuration.</p>

<h4 id="theoldway">The old way</h4>

<p>If you've ever worked with a <code>UITableView</code> or <code>UICollectionView</code> directly from Xamarin.iOS, you're probably aware that you control the way those controls get data by passing in an instance of a 'data source', typically a subclass of <code>UITableViewDataSource</code> or <code>UICollectionViewDataSource</code>. For simplicity's sake, I'll focus on <code>UITableView</code> from now on, which requires a <code>UITableViewDataSource</code> to implement callback methods that answer three key questions: </p>

<ol>
<li>How many sections does the data for this table have?  </li>
<li>How many rows does a given section in the table have?  </li>
<li>What cell should be displayed for a given row in the table?</li>
</ol>

<p>Essentially, when the tableview is put on screen or reloaded, it calls the first two methods to determine the size of the dataset. Then it calls the third method as and when it needs to display cells on screen. Your implementations typically call to a backing store like a <code>List&lt;T&gt;</code> for things like the counts. When I first started out with Objective-C, this was a bit of a mind bender for me, but a callback-based configuration can be very flexible.</p>
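<p>In Xamarin.iOS terms, answering those three questions maps onto a data source subclass along these lines (a minimal sketch - <code>PeopleSource</code> is an illustrative name, and cell reuse is simplified):</p>

```csharp
using System.Collections.Generic;
using Foundation;
using UIKit;

public class PeopleSource : UITableViewDataSource
{
    private readonly List<string> _people;
    public PeopleSource(List<string> people) => _people = people;

    // 1. how many sections does the data have?
    public override nint NumberOfSections(UITableView tableView) => 1;

    // 2. how many rows does a given section have?
    public override nint RowsInSection(UITableView tableView, nint section) => _people.Count;

    // 3. what cell should be displayed for a given row?
    public override UITableViewCell GetCell(UITableView tableView, NSIndexPath indexPath)
    {
        var cell = tableView.DequeueReusableCell("person")
                   ?? new UITableViewCell(UITableViewCellStyle.Default, "person");

        cell.TextLabel.Text = _people[indexPath.Row];
        return cell;
    }
}
```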

<p>The challenge with the API, then, is handling changes to the dataset. For example, inserting and removing rows, or reordering them. Changes of this nature need to first occur in the backing store (e.g. your <code>List&lt;T&gt;</code>), but then the precise set of changes needs to be described to the tableview in order to have it animate the changes nicely. For example, if the second item in the list was removed, and a new one was added at the end, a set of calls like this would be made:</p>

<pre><code>TableView.BeginUpdates();

// remove the second item
TableView.DeleteRows(new [] { NSIndexPath.FromRowSection(1, 0) }, UITableViewRowAnimation.Automatic);

// add item at the end
TableView.InsertRows(new [] { NSIndexPath.FromRowSection(5, 0) }, UITableViewRowAnimation.Automatic);

TableView.EndUpdates();
</code></pre>

<p>This is a simple example, but changesets can be quite complicated to describe. Your code 'describes the diff' between the current state and new state; a task that can be generalised but is not very fun. Getting it wrong results in the dreaded update assertion and crash:</p>

<pre><code>*** Assertion failure in -[UITableView _endCellAnimationsWithContext:]
*** Terminating app due to uncaught exception 'NSInternalInconsistencyException', reason: 'Invalid update: invalid number of rows in section 1.  The number of rows contained in an existing section after the update (1) must be equal to the number of rows contained in that section before the update (2), plus or minus the number of rows inserted or deleted from that section (0 inserted, 0 deleted) and plus or minus the number of rows moved into or out of that section (0 moved in, 0 moved out).' 
</code></pre>

<p>A diff algorithm is a diff algorithm, and should 'always work' once you get it right. Still, in an <code>async</code>-friendly world, inconsistency can also occur if the backing store is touched during an update cycle. </p>

<p>A common 'resolution' for issues of this nature was to abandon <code>Begin/EndUpdates</code> and call <code>ReloadData</code>, which causes a full/expensive, non-animated, reload of the entire table view. Apparently Apple was unhappy that this kept taking place, and now we have a new option. </p>

<h3 id="newshinythediffabledatasource">New shiny - the diffable data source</h3>

<p>With the introduction of iOS13, three new interesting classes appear in UIKit/Foundation. First, we have two new data source base classes:</p>

<ul>
<li><code>UITableViewDiffableDataSource&lt;TSectionIdentifier, TItemIdentifier&gt;</code></li>
<li><code>UICollectionViewDiffableDataSource&lt;TSectionIdentifier, TItemIdentifier&gt;</code></li>
</ul>

<p>Additionally, there's a 'snapshot' class:</p>

<ul>
<li><code>NSDiffableDataSourceSnapshot&lt;TSectionIdentifier, TItemIdentifier&gt;</code></li>
</ul>

<p>All three classes are generic to a section and item identifier type. The idea behind these classes is that rather than implement callbacks like <code>NumberOfSections</code>, and <code>RowsInSection</code>, we instead create and provide a snapshot of our data to a diffable data source using a new <code>ApplySnapshot(snapshot, animatingDifferences)</code> method, and iOS automatically determines the changes, updating the tableview contents and animating the transition for us. </p>
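<p>In code, the snapshot flow looks roughly like this (a sketch against the Xamarin.iOS bindings, using <code>NSString</code> identifiers for simplicity, since identifier types need to be <code>NSObject</code>-derived in Xamarin.iOS):</p>

```csharp
using Foundation;
using UIKit;

public static class SnapshotExample
{
    // Sketch: rebuild the snapshot from the current items and let UIKit diff
    // it against the previous one.
    public static void Reload(
        UITableViewDiffableDataSource<NSString, NSString> dataSource,
        NSString[] items)
    {
        var snapshot = new NSDiffableDataSourceSnapshot<NSString, NSString>();
        snapshot.AppendSections(new[] { new NSString("main") });
        snapshot.AppendItems(items);

        // UIKit works out the insertions/removals/moves and animates them
        dataSource.ApplySnapshot(snapshot, animatingDifferences: true);
    }
}
```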

<p><center>  </center></p>

<video controls autoplay loop width="100%">  
    <source src="https://ryandavis.io/content/images/2019/11/dds.mp4" type="video/mp4">
</video>

<p><em><small><smaller>Look ma! No manual diffing!</smaller></small></em></p>

<p>Cool right? By using a diffable data source, we save ourself the hassle of manually specifying the way changes should be animated. But how does a diffable data source know what has and hasn't changed? </p>

<p>You might reasonably guess that the diffing is performed by keeping track of which object instances are added or removed between snapshots. In practice, that would be too brittle - actions like refreshing data from an API typically result in the creation of an entire new set of objects - so the diffing needs to work more on a basis of semantic equivalence. </p>

<p>The mechanism for this ends up coming down to those two generic type arguments we saw earlier:  <code>TSectionIdentifier</code> and <code>TItemIdentifier</code>. In native iOS land, these type arguments must conform to the <code>Hashable</code> protocol, which essentially defines the iOS equivalents of .NET's <code>GetHashCode</code> and <code>Equals</code> methods. Similar to .NET, in iOS the ultimate base class <code>NSObject</code> conforms to <code>Hashable</code> and the appropriate methods are exposed in Xamarin.iOS as <code>GetNativeHash</code> and <code>IsEqual</code>. UIKit uses these methods on the section and row types to decide whether items are being added, removed or rearranged between snapshots. So all we have to do is implement those, right?</p>

<h3 id="diffablepowerswithoutderivingfromnsobject">Diffable powers without deriving from <code>NSObject</code></h3>

<p>The problem with having to implement <code>GetNativeHash</code> and <code>IsEqual</code> in Xamarin.iOS is that it requires types to be derived from <code>NSObject</code>, which (at least for me) is an unacceptable constraint. First and foremost, <code>NSObject</code> is not portable. Beyond that, deriving models from it precludes deriving them from other base types, and a need to derive from <code>NSObject</code> means you can't use types that you don't own, because you can't force them to derive from <code>NSObject</code>. What's the answer then? There are probably a few options, but the one I find least intrusive is the use of a wrapper type. The most basic implementation is one that forwards <code>NSObject</code> hash calls to the inner .NET type, like this:</p>

<p><center>  </center></p>

<script src="https://gist.github.com/rdavisau/20bfa9e46e89cf31d409f8796cc41cf1.js?file=PassthroughIdentifierType.cs"></script>  

<p></p>

<p>Now, given a <code>T</code> that has a meaningful <code>GetHashCode()</code> implementation, we can pass an <code>IdentifierType&lt;T&gt;</code> to our diffable data source and snapshot methods. Another approach I like is to provide a <code>Func&lt;T,U&gt;</code> that will return the hash-friendly representation of <code>T</code>. That would look like this:</p>

<p><center>  </center></p>

<script src="https://gist.github.com/rdavisau/20bfa9e46e89cf31d409f8796cc41cf1.js?file=FuncIdentifierType.cs"></script>  

<p></p>

<p>With the latter approach, we can use any type with diffable data sources, even if we don't own it and can't modify it to provide a meaningful <code>GetHashCode()</code> implementation.</p>
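<p>Setting the <code>NSObject</code> plumbing aside, the identity-forwarding idea itself is plain .NET. A minimal sketch with hypothetical names (<code>FuncIdentity</code>, <code>ExternalDto</code>), assuming identity is derived from a caller-supplied selector:</p>

```csharp
using System;

// wrap a third-party ExternalDto (which we can't modify) and derive its
// identity from a selector the caller provides
var a = new FuncIdentity<ExternalDto>(new ExternalDto { Key = "abc" }, d => d.Key);
var b = new FuncIdentity<ExternalDto>(new ExternalDto { Key = "abc" }, d => d.Key);

Console.WriteLine(a.Equals(b));                        // True
Console.WriteLine(a.GetHashCode() == b.GetHashCode()); // True

// identity wrapper that forwards hashing/equality to a projected value,
// so types we don't own can participate in diffing
public class FuncIdentity<T>
{
    public T Value { get; }
    private readonly Func<T, object> _identity;

    public FuncIdentity(T value, Func<T, object> identity)
    {
        Value = value;
        _identity = identity;
    }

    public override bool Equals(object obj)
        => obj is FuncIdentity<T> other
           && Equals(_identity(Value), other._identity(other.Value));

    public override int GetHashCode() => _identity(Value).GetHashCode();
}

public class ExternalDto { public string Key { get; set; } }
```

<p>The real wrapper does the same forwarding, but from <code>GetNativeHash</code>/<code>IsEqual</code> on an <code>NSObject</code> subclass.</p>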

<h3 id="diffablepowerswithoutdoublingyourloc">Diffable powers without doubling your LOC</h3>

<p>This still isn't that great. Wrapping the types means writing verbose, heavy-handed code with type parameters all over the place. That said, it should be possible to encapsulate that in a way that gives us a more pleasant consuming experience. I present my attempt, <code>EasyUITableViewDiffableDataSource</code> (name subject to change).</p>

<p><center>  </center></p>

<script src="https://gist.github.com/rdavisau/20bfa9e46e89cf31d409f8796cc41cf1.js?file=EasyUITableViewDiffableDataSource.cs"></script>  

<p><em><small><smaller>(It's easy to consume, not easy to read ok)</smaller></small></em></p>

<p><code>EasyUITableViewDiffableDataSource</code> (name subject to change) exposes a much friendlier API to a C# consumer. It removes the need to think about anything <code>NSObject</code>-related, and lets the consumer work directly with .NET types, taking <code>Func</code>s at the time of creation that configure section and row identifiers. It also exposes a new overload of the <code>ApplySnapshot</code> method that works directly with .NET types and handles the busywork of preparing a UIKit-friendly snapshot using our <code>IdentifierType</code>s automatically. This is almost perfect, but it has one glaring issue - the type signature:</p>

<p><img src="http://ryandavis.io/content/images/2019/11/image-5.png" alt="" title=""> </p>

<p>A type signature like the one above poses two major issues. The first is verbosity and awkward usage - when you <code>new</code> up an instance of a type like this, you have to specify every type parameter explicitly - none of them can be inferred from the constructor arguments. </p>

<p>The second is the brittleness - specifying each of the type parameters explicitly (in the constructor, and/or in a field/property definition) means lots of places to make changes if you decide your row or section identifier needs to be different. Not to worry, these concerns with <code>EasyUITableViewDiffableDataSource</code> (name subject to change) can be addressed.</p>

<h3 id="diffablepowerswithouttoomanyts"><code>&lt;Diffable, powers&lt;without&lt;too, many&gt;, Ts&gt;&gt;</code></h3>

<p>The answer is a static helper that performs the creation of the data source. A static helper method is eligible for method argument type inference, which resolves the verbosity issue at the point of use. </p>
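<p>The inference win is easy to see with a stripped-down shape (the names here are hypothetical stand-ins, not the real data source types):</p>

```csharp
using System;

// with explicit construction, every type argument must be spelled out:
var verbose = new DataSource<string, int>(s => s.Length);

// with a static factory method, the compiler infers them from the arguments:
var inferred = DataSource.Create((string s) => s.Length);

Console.WriteLine(inferred.GetType() == verbose.GetType()); // True

// a stripped-down stand-in for the real data source type
public class DataSource<TSection, TRowIdentifier>
{
    public Func<TSection, TRowIdentifier> RowIdentifier { get; }
    public DataSource(Func<TSection, TRowIdentifier> rowIdentifier)
        => RowIdentifier = rowIdentifier;
}

public static class DataSource
{
    // method type arguments are inferred from the Func argument,
    // which constructors cannot do
    public static DataSource<TSection, TRowIdentifier> Create<TSection, TRowIdentifier>(
        Func<TSection, TRowIdentifier> rowIdentifier)
        => new DataSource<TSection, TRowIdentifier>(rowIdentifier);
}
```

<p>C# allows a non-generic static class to share a name with a generic type, which is what makes the <code>DataSource.Create(...)</code> spelling work.</p>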

<p>The brittleness can also be addressed by recognising that the existing set of generic type arguments exposes what are primarily configuration/implementation details. The only type argument interesting to the consumer after the data source is created is the top-level element type (<code>TSection</code>), which is what gets passed to the .NET-friendly <code>ApplySnapshot</code> method. Knowing that, we can hide the gory details of <code>EasyUITableViewDiffableDataSource</code> (name subject to change) behind an interface, and return the interface from our static helper method.</p>

<p><center>  </center></p>

<script src="https://gist.github.com/rdavisau/20bfa9e46e89cf31d409f8796cc41cf1.js?file=IUITableViewDiffableDataSource.cs"></script>  

<p></p>

<p><center>  </center></p>

<script src="https://gist.github.com/rdavisau/20bfa9e46e89cf31d409f8796cc41cf1.js?file=EasyUITableViewDiffableDataSourceHelper.cs"></script>  

<p></p>

<p>The helper also gives us the opportunity to address a suboptimal API in the original diffable data source types, which require generic type arguments to be specified but do not make use of them when providing arguments for the <code>getCell</code> callback. </p>

<h3 id="wasitworthit">Was it worth it?</h3>

<p>Whew, that was a ride. But now, a .NET friendly diffable data source can be configured very naturally:</p>

<p><center>  </center></p>

<script src="https://gist.github.com/rdavisau/20bfa9e46e89cf31d409f8796cc41cf1.js?file=Using%20EasyUITableViewDiffableDataSourceHelper.cs"></script>  

<p></p>

<p>As a reminder, that configuration gets you free tracking on every call to <code>ApplySnapshot</code> (same video as earlier):</p>

<p><center>  </center></p>

<video controls autoplay loop width="100%">  
    <source src="https://ryandavis.io/content/images/2019/11/dds.mp4" type="video/mp4">
</video>  

<p></p>

<p>The highlights from this approach:</p>

<ul>
<li><p>No type arguments specified during creation - everything is inferred</p></li>
<li><p>The returned type, <code>IUITableViewDiffableDataSource&lt;Game&gt;</code> couples the code to just the core model type</p></li>
<li><p>No need to derive from <code>NSObject</code>, or to modify existing types, or to prepare snapshots directly</p></li>
<li><p>Automatic typed model in <code>GetCell</code></p></li>
<li><p>Automatic change tracking! </p></li>
</ul>

<p>The potential downsides from this approach:</p>

<ul>
<li><p>Introduction of an unconstrained generic <code>NSObject</code> subclass. I am not well informed enough to decide whether that's a major issue. In an interpreter world the idea that we know all possible <code>T</code>s for a generic <code>NSObject</code> subclass at compile time is no longer valid, so maybe it's less of a problem these days</p></li>
<li><p>Additional allocations incurred during the creation of wrapper objects. Since iOS is a beast, it probably isn't a major concern. However, a modification to <code>EasyUITableViewDiffableDataSource</code> (name subject to change) could involve reuse of wrapper instances between snapshots.</p></li>
<li><p>Supported on iOS13 and above only… enough said</p></li>
<li><p><strike> <em>(Probably one I should have mentioned earlier 💀)</em> - bugs in the diffing. There are a few odd behaviours I've noticed as I stress test this with more complicated changesets, including the ability to crash the app. Given we're seeing fairly frequent updates to iOS at the moment, it might be worth waiting a little longer before going all in on this. </strike>. As it turns out, the bugs I encountered were actually in <code>UITableView</code> proper and not in the diffable data sources. Invoking <code>Begin</code>/<code>EndUpdates</code> directly against the <code>UITableView</code> with the same set of updates (as diffable does internally) causes the crash. I found this when <a href="https://twitter.com/__breeno">Steve Breen</a> (the guy in the WWDC video! :O) drove by <a href="https://twitter.com/rdavis_au/status/1195188561143713793">my tweet about the bug</a>, asked for a repro, and then verified the issue was upstream. As someone who once found themselves in a position where they were often being blamed for bugs in upstream dependencies, I am happy to see it wasn't a bug in diffable after all. </p></li>
</ul>

<p>If you fall into the small niche of use cases in Xamarin.iOS that benefit from diffable data, I think that the upsides outweigh the downsides. If nothing else, the new data source types will make life easier in my demo apps and proof of concepts. If you're interested in giving it a try, the relevant classes are all available <a href="https://gist.github.com/rdavisau/20bfa9e46e89cf31d409f8796cc41cf1">here</a>. </p>

<p>Happy (not having to do) diffing!</p>]]></description><link>http://ryandavis.io/using-diffable-data-sources-in-xamarin-ios/</link><guid isPermaLink="false">937adc96-6d0b-4d28-88c0-27a0bac6a0ac</guid><category><![CDATA[xamarin]]></category><category><![CDATA[code]]></category><category><![CDATA[ios13]]></category><category><![CDATA[uitableview]]></category><category><![CDATA[uicollectionview]]></category><dc:creator><![CDATA[Ryan Davis]]></dc:creator><pubDate>Fri, 15 Nov 2019 03:42:00 GMT</pubDate></item><item><title><![CDATA[From Xamarin Native to Xamarin.Forms (CODE Magazine)]]></title><description><![CDATA[<p>"Native or Forms"? is one of those fundamental questions that almost every team looking to start a Xamarin-based project needs to make a call on. The question implies an either/or arrangement, and the reality is that the majority of Xamarin projects do end up being either 'all Native' or 'all Forms'. In the recently released special .NET Core 3 Focus edition of <a href="https://www.codemag.com/">CODE Magazine</a>, I wrote about a situation in which a project moved from a Xamarin Native architecture to being a hybrid of both Xamarin.Native and Xamarin.Forms using the <a href="https://docs.microsoft.com/en-us/xamarin/xamarin-forms/platform/native-forms">Native Embedding</a> feature; in a low risk, maintainable manner. The article includes an overview of Native Embedding, how it was applied to the project, and some tips and techniques for approaching an architectural migration of the Xamarin.Forms kind.</p>

<p>The magazine itself includes several other interesting articles, all with a .NET focus. It has both print and online versions, the latter of which can be accessed <a href="https://www.codemag.com/Magazine/Issue/9ab5fafc-ad26-4e2d-98e0-56d6212123bb">here</a>: <center><a href="https://www.codemag.com/Magazine/Issue/9ab5fafc-ad26-4e2d-98e0-56d6212123bb"><img src="https://www.codemag.com/Magazine/CoverLarge/9ab5fafc-ad26-4e2d-98e0-56d6212123bb" alt="" title=""></a><p> <br>
<em><small>you can find <a href="https://www.codemag.com/Article/1911092/From-Xamarin-Native-to-Xamarin.Forms-Reaping-the-Rewards-without-the-Risk">my article</a> in this issue alongside a very impressive line up of authors 😵</small></em></p></center></p>

<p>I have to give a special thanks to <a href="https://twitter.com/kphillpotts">Kym Phillpotts</a>, whose awesome Xamarin.Forms UI challenges served to demonstrate the sophistication of UI you can produce with the framework, all whilst adding a splash of colour to the article. I was also very fortunate to get a lot of help from <a href="https://twitter.com/davidortinau">David Ortinau</a>, who reviewed and provided feedback on my drafts.</p>

<p>Dave also found a copy of the article in the wild at Microsoft Ignite and tweeted to let me know. Thanks Dave!</p>

<p><center> <br>
<blockquote class="twitter-tweet" data-theme="light"><p lang="en" dir="ltr">look what I found <a href="https://twitter.com/rdavis_au?ref_src=twsrc%5Etfw">@rdavis_au</a> !!! <a href="https://twitter.com/hashtag/XamarinForms?src=hash&amp;ref_src=twsrc%5Etfw">#XamarinForms</a> <a href="https://t.co/6ZZZsoC7Sx">pic.twitter.com/6ZZZsoC7Sx</a></p>&mdash; David Ortinau @ #MSIgnite (@davidortinau) <a href="https://twitter.com/davidortinau/status/1191852061052342272?ref_src=twsrc%5Etfw">November 5, 2019</a></blockquote> <script async src="https://platform.twitter.com/widgets.js" charset="utf-8"></script> <br>
</center></p>

<p>Having conquered print, I suppose I now need to decide between film and tv next 😎</p>]]></description><link>http://ryandavis.io/native-to-forms-code-magazine/</link><guid isPermaLink="false">60aed051-963a-42b8-861e-6e722c9f065f</guid><category><![CDATA[xamarin]]></category><category><![CDATA[almost-famous]]></category><category><![CDATA[xamarin.form]]></category><category><![CDATA[native-embedding]]></category><category><![CDATA[code-magazine]]></category><dc:creator><![CDATA[Ryan Davis]]></dc:creator><pubDate>Fri, 08 Nov 2019 00:45:30 GMT</pubDate></item><item><title><![CDATA[(Some of) What's new in iOS13]]></title><description><![CDATA[<p>At the <a href="https://www.meetup.com/Queensland-based-MonoTouch-and-Mono-for-Android/">Queensland C# Mobile Developers meetup group</a>'s <a href="https://www.meetup.com/Queensland-based-MonoTouch-and-Mono-for-Android/events/264233071/">August 2019 Meetup</a>, I gave a talk entitled <em>"(Some of) What's new in iOS13"</em>, covering - appropriately - some of the new features iOS13 brings us as Xamarin developers. Given there are such a large number of additions in iOS13, I decided to focus on four areas - <strong>Dark Mode</strong>, <strong>PencilKit</strong>, <strong>ARKit</strong> and <strong>CoreML</strong>. Most of the functionality was covered in demos.</p>

<p>Slides are available at the end of the post, and  detail on each of the new areas is below. If you want to play along, you can install the <a href="https://devblogs.microsoft.com/xamarin/ios-13-xcode-11/">Xamarin.iOS / Xcode11 Preview</a> and clone the <a href="https://github.com/rdavisau/hello-ios13">sample app</a>.</p>

<h3 id="darkmode">Dark Mode</h3>

<p><small>(<strong>WWDC Reference:</strong> <a href="https://developer.apple.com/videos/play/wwdc2019/214/">Implementing Dark Mode on iOS</a>)</small> <br>
As you have probably heard, iOS13 ships with a new 'dark mode', in the form of a system-wide theme setting that is observed by all Apple apps and various aspects of UIKit. This themeability is mostly enabled by new dynamic behaviour in <code>UIColor</code> and in image assets - specifically, the ability for these to take on different appearances depending on whether dark mode is enabled or not. Apple's semantic colours (e.g. <code>UIColor.LabelColor</code>, <code>UIColor.DarkTextColor</code>) are all dynamic in iOS13, which means they automatically change based on the user's selection. A number of new semantic colours have been added in iOS13, some of which you can see below. </p>

<p><img src="http://ryandavis.io/content/images/2019/08/semantic-colours.png" alt=""></p>

<p>You can easily determine which colours on the <code>UIColor</code> class are dynamic, as they have the -<code>Color</code> suffix. For example, <code>UIColor.SystemGreenColor</code> is dynamic, whereas <code>UIColor.Green</code> is not.</p>

<p>By targeting iOS13, your app also opts in to themeability, which means that parts of it will begin responding automatically to changes - the parts that were already referring to semantic colours, either explicitly in your code, or by default (e.g. a label that hasn't had its text colour modified). It's likely that an existing app will not be using dynamic colours throughout, so there may be not-insubstantial effort involved in making it work cleanly in both modes. Although Apple talks a lot about defining dynamic colours in your asset catalog, dynamic colours can also be defined programmatically, which may simplify migrations depending on your app's architecture. This is achieved by creating a <code>UIColor</code> using a new creation method that takes a <code>Func&lt;UITraitCollection, UIColor&gt;</code> that iOS will call whenever the theme changes; in this delegate you test for the current theme and return the appropriate colour. It sounds complicated but is simple in practice:</p>

<p><img src="http://ryandavis.io/content/images/2019/08/custom-dynamic-colours-1.png" alt="Defining custom dynamic colours programmatically"></p>

<p>In iOS13 Dark Mode can be toggled by the user (relatively) easily via Control Center, so your app should be ready to have the theme changed while it is running - it's not enough to check the setting at startup. If you need to perform arbitrary work on a screen in response to a theme change, you can do so by overriding the <code>TraitCollectionDidChange</code> method on <code>UIViewController</code> (which could be your base viewcontroller or Xamarin.Forms PageRenderer), and making changes based on the new theme setting. For example, in the below demo (which represents me and my feelings on non-dark mode apps) we replace images, change text, and toggle animations when notified that the theme has changed in  <code>TraitCollectionDidChange</code>.</p>

<p><center>  </center></p>

<video controls autoplay loop width="100%">  
    <source src="https://ryandavis.io/content/images/2019/08/excuse-me-turn-that-off.mp4" type="video/mp4">
</video>  

<p></p>

<p>If you want to target iOS13 but aren't up to making the changes for theme support,  you can opt your app out of dynamic behaviour by setting the <code>OverrideUserInterfaceStyle</code> on a view-like element to <code>UIUserInterfaceStyle.Light</code> or <code>UIUserInterfaceStyle.Dark</code>. Your <code>UIWindow</code> is a good place if you want it to apply to the entire app. Make sure to <code>CheckSystemVersion</code> before you do, as the selector is not present on devices below iOS13. </p>

<p>Currently Apple is not requiring apps to support dark mode, but as the OS and built-in apps support it, it will be interesting to see whether users come to expect it. When updating to iOS13 or setting up a new iOS13 device, users are explicitly asked to select between the light and dark themes, so they will be aware of the feature.</p>

<p>In the Xamarin.Forms space, Gerald has put together an informative and entertaining proposal/specification for handling the new appearance considerations for iOS <a href="https://github.com/xamarin/Xamarin.Forms/issues/7304">here</a>. It also begins to consider the equivalent theming coming in Android 10.</p>

<h3 id="pencilkit">PencilKit</h3>

<p><small>(<strong>WWDC Reference:</strong> <a href="https://developer.apple.com/videos/play/wwdc2019/221/">Introducing PencilKit</a>)</small> <br>
PencilKit is a high-performance drawing framework that makes it super easy to give users of your app a sophisticated but familiar drawing environment, with a very small amount of code. As the name suggests, the framework is optimised for use with Apple Pencil (which is quite the feat of engineering - watch the WWDC video for more information), but also works fine for users providing direct touch input. Two core pieces of PencilKit that you're likely to interact with are <code>PKToolPicker</code> and <code>PKCanvasView</code>.</p>

<h5 id="pktoolpicker">PKToolPicker</h5>

<p>The <code>PKToolPicker</code> class lets you control the display of a toolbox UI for drawing tasks.</p>

<p><img src="http://ryandavis.io/content/images/2019/08/pktoolpicker.png" alt="Various parts of the PKToolPicker UI, including the full panel, colour picker, and the interface that displays when you select a tool"></p>

<p>You might think the UI looks familiar - that's because it's made up of the same core set of tools that you have access to for markup in other Apple apps like Photos. <code>PKToolPicker</code> gives you colour and drawing tool pickers, a ruler, a lasso selection tool and an eraser. Additionally, it includes built-in undo/redo functionality, and the toolbox can be moved around the screen and docked to edges by the user. All of this functionality comes for free - no need for you to write any code.</p>

<h5 id="pkcanvasview">PKCanvasView</h5>

<p><code>PKCanvasView</code> is (as you might expect) a class that represents a canvas onto which the user can draw. It's a <code>UIView</code> subclass (more precisely, a <code>UIScrollView</code> subclass) so can be added to your normal view hierarchy at any arbitrary size, which may be smaller or larger than the actual content size. You'll generally use <code>PKCanvasView</code> together with <code>PKToolPicker</code> and can set up a sophisticated drawing environment with as few lines of code as below:</p>

<p><img src="http://ryandavis.io/content/images/2019/08/how-2-art.png" alt="image of code from the sample app that shows how to set up the canvas"></p>

<p><code>PKCanvasView</code> takes input from a user directly or via an Apple pencil and maintains a <code>PKDrawing</code> representing the drawn content. Although it's possible to get a bitmap representation of drawn content out of <code>PKDrawing</code>, internally it maintains a vector-like representation. This allows for several nice behaviours, including smart selection/smart modification, and automatic recolouring. For example, in the below video (slightly sped up) the lasso tool locks to the sun's rays and allows them to be dynamically recoloured and moved:</p>

<video controls autoplay loop width="100%">  
    <source src="https://ryandavis.io/content/images/2019/08/hot-sun.mp4" type="video/mp4">
</video>

<p>By default, <code>PKCanvasView</code> automatically recolours its content in response to theme changes. For example, the below attempt at a Xamagon recolours based on the selected theme:</p>

<p><img src="http://ryandavis.io/content/images/2019/08/recolour-xamagon.png" alt="image of the same xamagon I tried to draw in PKCanvasView when being viewed with Dark theme enabled/disabled">
If you add a delegate to your <code>PKCanvasView</code>, you can be notified of changes to the drawing as the user makes them. You can get a bitmap representation of a <code>PKDrawing</code> using the <code>GetImage</code> method. For example, in the below demo (sped up), changes to a drawing are used to create a tiling background, and each subsequent image is layered and animated in a different direction, giving a space-like effect:</p>

<video controls autoplay loop width="100%">  
    <source src="https://ryandavis.io/content/images/2019/08/xtarry-xky-xpeedy.mp4" type="video/mp4">
</video>

<p>In an <a href="https://ryandavis.io/how-to-have-your-ios-13-preview-cake-and-emit-it-too/">earlier post</a> I also demonstrated using <code>PKCanvasView</code> with ARKit and the demo app includes a simplified version of that (no repl required :))</p>

<p>Although these demos have mostly been focused on creating new drawings, <code>PKCanvasView</code> can also be used for markup, by giving it a transparent background and placing it over other content.</p>

<h3 id="arkit3">ARKit3</h3>

<p><small>(<strong>WWDC Reference:</strong> <a href="https://developer.apple.com/videos/play/wwdc2019/604/">Introducing ARKit3</a>)</small> <br>
ARKit3 is the third iteration of Apple's augmented reality (AR) framework for mobile devices. In this sense, it is more evolution than revolution, but still includes a large number of welcome improvements across a range of areas, including increased performance and accuracy, enhancements to multiuser AR and a new record/replay capability. Several new AR tasks are now supported, including body tracking, people occlusion, multi-camera tracking and automatic coaching.</p>

<p>Demonstrating AR features can pose practical challenges, so I focussed on a few small demos.</p>

<h5 id="arcoachingoverlayview">ARCoachingOverlayView</h5>

<p>A good AR experience begins with having good tracking data and anchors. <code>ARCoachingOverlayView</code> is a new ARKit feature provided by Apple that allows you to embed an automated, consistent, guide-like overlay into your AR experiences, to help users orient themselves and ARKit correctly.</p>

<p>You make use of the coach by linking an <code>ARCoachingOverlayView</code> to your <code>ARSession</code> and providing it an <code>ARCoachingGoal</code>. Whenever the coach detects that the goal is not met, it will automatically display and guide the user towards the outcome.</p>

<p><img src="http://ryandavis.io/content/images/2019/08/arcoach-display.png" alt="Two of the guide views that will display when the VerticalPlane goal is set"></p>

<p><code>ARCoachingOverlayView</code> can be given a delegate, which will call back in response to events such as activation or deactivation of the coach. You can respond to these in order to update your interface (e.g. remove distractions) to help the user focus on tracking.</p>

<p>Since the coach will presumably be used by Apple's own AR application as well as other third parties, I guess the idea is that users will become familiar with this sort of onboarding.</p>

<h5 id="peopleocclusion">People Occlusion</h5>

<p>With the new person segmentation capability, ARKit3 is able to detect people in a frame and their distance from the camera ("depth"), in order to have people and virtual content occlude each other appropriately. The below demo demonstrates this new capability, first with segmentation disabled (to convey the problem) and then with segmentation enabled. Note that segmentation generally performs better for people (and larger parts of people, like bodies) that are further from the camera than what I demonstrate here.</p>

<p><center>  </center></p>

<video controls autoplay loop width="100%">  
    <source src="https://ryandavis.io/content/images/2019/08/segmentation.mp4" type="video/mp4">
    
</video>  

<p></p>

<p>Enabling segmentation is as easy as setting the appropriate flag on your <code>ARWorldTrackingConfiguration</code>: </p>

<p><img src="http://ryandavis.io/content/images/2019/08/enable-segmentation.png" alt="Code snapshot showing possible FrameSemantic values"></p>

<p>You can also access estimated depth and segmentation data from ARKit with each frame. In the above video this was simply displayed towards the top of the screen, but if you are clever there might be other things you can do with it.</p>

<h5 id="multicameratracking">Multi-camera tracking</h5>

<p>In previous versions, ARKit has been limited to using a single camera at a time for AR purposes. Most tasks, such as world tracking, image and object detection, and image tracking, are performed using the rear camera, while face tracking relies on technologies present in the front camera only. In the past, you would need to use an <code>ARFaceTrackingConfiguration</code> to perform face tracking, meaning that performing sophisticated world tracking at the same time was off the table. In ARKit3, you can now make use of both cameras simultaneously during AR work, making it possible to combine front and back camera tasks. <code>ARWorldTrackingConfiguration</code> has a new property, <code>UserFaceTrackingEnabled</code>, which when set causes the front camera to provide face tracking input to the AR session.</p>

<p><img src="http://ryandavis.io/content/images/2019/08/world-tracking-face.png" alt="code snapshot displaying the new UserFaceTrackingFlag"></p>

<p>You are notified of face anchors in the same way that you are when using an <code>ARFaceTrackingConfiguration</code>, so code you have written for face tracking purposes can still be used here. Combining face tracking with other AR might be useful for allowing the user's expressions or facial movement to influence the scene. Or, you could use it to create creepy floating heads that match your own movement in 3D space. The possibilities are endless.</p>

<p><img src="http://ryandavis.io/content/images/2019/08/many-faces.jpg" alt="Image of creepy floating heads matching my movement in 3D space"></p>

<h5 id="honourablementionrealitykit">(Honourable Mention) RealityKit</h5>

<p>RealityKit is a new 'single-experience-focused' (my words) framework for AR. Compared to the typical arrangement of ARKit + SceneKit, RealityKit provides a simplified API and various constraints that make creating AR experiences easier. The RealityKit framework is supported by a new application, 'Reality Composer', which provides a GUI for defining RealityKit projects and can be used on macOS or on iOS devices. The RealityKit APIs do not appear to be bound in current Xamarin.iOS previews, maybe because they are Swift-only. I wonder if there will be an answer for this by release. </p>

<h3 id="coreml3">CoreML3</h3>

<p><small>(<strong>WWDC Reference:</strong> <a href="https://developer.apple.com/videos/play/wwdc2019/704/">Core ML 3 Framework</a>)</small> <br>
Like ARKit, Apple's CoreML framework comes with a bunch of improvements, many of which I cannot claim to understand completely. </p>

<ul>
<li><p><strong>Protocol extensions</strong>: CoreML3 brings with it version 4 of the CoreML protocol, which includes support for several new model types and a major bump in the number of neural network layer types, making it a compatible conversion target for more external models.</p></li>
<li><p><strong>On-device model personalisation</strong>: A certain subset of CoreML model types can now be marked as updatable. An updatable model can be deployed with your app, augmented with new examples collected from the user, and retrained in situ, without need for connectivity or external services, and without data leaving the user's device. </p></li>
<li><p><strong>Improvements to CoreML tooling</strong>: Both CreateML (Apple's GUI/Wizard-based model training tool) and Turi Create (Apple's Python-based model training library) have received several enhancements; in the talk I looked at the former. </p></li>
</ul>

<h5 id="easysoundclassifiertrainingwithcreateml">Easy Sound Classifier training with CreateML</h5>

<p>A new Sound Classifier wizard has been added to CreateML, making it easy to train CoreML models that can categorise audio. To demonstrate this, I used the wizard to train a model that could recognise categories of sound from the <a href="https://www.kaggle.com/c/freesound-audio-tagging">Freesound General-Purpose Audio Tagging Challenge</a>. The challenge dataset included approximately 9,000 training examples across 41 categories (applause, finger clicking, keys jangling, barking and many others). With a little preprocessing (generating the folder/file structure that CreateML expects), this dataset dropped straight into CreateML for training without any issues. </p>

<p><img src="http://ryandavis.io/content/images/2019/08/model-data.png" alt="image demonstrating train/test folder structure and how it maps to createml"></p>

<p>Training the model took about two hours on my sacrificial Catalina MBP 2016, and it evaluated with reasonable, but not incredible, results. Later I read that about 60% of the training labels in the dataset have not been manually verified. Mislabelled data would influence the quality of the model and evaluation metrics. </p>

<p><img src="http://ryandavis.io/content/images/2019/08/model-eval.png" alt="image showing the evaluation results of the trained model"></p>

<p>To improve the model, I ended up adding an additional 'Ryan' category, trained on audio from my <a href="https://youtu.be/lRPzisWWats">Introduction to ARKit talk at the Melbourne Xamarin Meetup</a>. As we'll see, it did a pretty good job at detecting me.</p>

<h5 id="liveaudiorecognitionwithsoundanalysis">Live Audio Recognition with SoundAnalysis</h5>

<p>SoundAnalysis is a new framework in iOS13 that simplifies the process of using a CoreML model for sound classification. It takes a trained model (see previous section) and an audio source (either samples, or streaming/recorded audio), and attempts to classify the audio using the model. </p>

<p>Although the documentation from Apple is currently light on, it's fairly straightforward to use a trained model with SoundAnalysis to classify live audio. The feature is a collaboration between <code>SNAudioStreamAnalyzer</code>, which performs analysis based on your model, and <code>AVAudioEngine</code>, which provides the input (e.g. from the microphone). A <code>DidProduceResult</code> callback gives you access to classification data.</p>

<p><img src="http://ryandavis.io/content/images/2019/08/soundanalysis.png" alt="code snapshot demonstrating set up of an audio classifier"></p>

<p>The results are an <code>NSArray</code> of <code>SNSoundClassificationResult</code>s that are essentially pairs of possible classification and confidence, an example printed to the console below:</p>

<pre><code>{
  "Finger_snapping": 0.77,
  "Scissors": 0.16,
  "Bus": 0.02,
  "Computer_keyboard": 0.02,
  "Bark": 0.01
}
</code></pre>

<p>I found the trained model tended to produce a lot of false positives (poor precision), but it's worth noting that the dataset was tailored towards sample classification, not necessarily classification of streaming audio. You can see the final model in use below: <br>
</p><p style="color:red"><strong>warning: contains very low quality audio recorded by laptop mic <br>
<small> also me attempting to bark like a dog</small></strong> <br>
<small><smaller> you were warned</smaller></small> <br>
</p><center>  <p></p>

<video controls width="100%">  
    <source src="https://ryandavis.io/content/images/2019/08/sound-classifier-t.mp4" type="video/mp4">
   </video></center> 
  

<p></p>

<p>This was a 'good' take; the model did not always perform as well as I'd like. By increasing the threshold for 'positive classification' and potentially smoothing predictions (e.g. only considering a sound classified after multiple consecutive confident classifications), it should be possible to reduce the false positive rate.</p>
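<p>For the smoothing idea, something as simple as the following would do. This is my own sketch, not part of SoundAnalysis: a label only 'counts' once it has been the confident top classification several times in a row.</p>

```csharp
using System;

public class SmoothedClassifier
{
    readonly int _required;
    readonly double _threshold;
    string _candidate;
    int _streak;

    public SmoothedClassifier(int required = 3, double threshold = 0.7)
        => (_required, _threshold) = (required, threshold);

    // Feed each top classification in; returns the label once it has been
    // seen, above the confidence threshold, _required times consecutively;
    // otherwise returns null.
    public string Observe(string label, double confidence)
    {
        if (confidence < _threshold || label != _candidate)
        {
            _candidate = confidence >= _threshold ? label : null;
            _streak = _candidate != null ? 1 : 0;
            return null;
        }

        return ++_streak >= _required ? label : null;
    }
}

// e.g. call Observe from DidProduceResult with the top classification, only
// treating a sound as detected when Observe returns non-null.
```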

<h2 id="demoapp">Demo App</h2>

<p>The "Hello iOS13" app containing demos is on GitHub: <a href="https://github.com/rdavisau/hello-ios13">rdavisau/hello-ios13</a>.</p>

<p><img src="https://ryandavis.io/content/images/2019/08/menu-mini-mini.jpeg" alt="menu screen"></p>

<p><center><small>(yeah it's <a href="https://github.com/rdavisau/ar-bound">ar-bound</a> with different images).</small></center></p>

<h2 id="slides">Slides</h2>

<p>Links for the slides are below. </p>

<p>Slides (31): <a href="https://ryandavis.io/content/images/2019/08/Some_of_What_s_new_in_iOS13_-_Ryan_Davis_20190827.pdf">PDF</a></p>

<table>  
<tr>  
<td width="373">  
<img src="http://ryandavis.io/content/images/2019/08/whatsnew/Slide1.PNG" alt="">
</td>  
<td>  
<img src="http://ryandavis.io/content/images/2019/08/whatsnew/Slide5.PNG" alt="">
</td>  
</tr>

<tr><td width="373">  
<img src="http://ryandavis.io/content/images/2019/08/whatsnew/Slide7.PNG" alt="">
</td>

<td width="373">

<img src="http://ryandavis.io/content/images/2019/08/whatsnew/Slide10.PNG" alt="">
</td>  
</tr>

<tr>  
<td width="373">  
<img src="http://ryandavis.io/content/images/2019/08/whatsnew/Slide12.PNG" alt="">
</td>  
<td width="373">  
<img src="http://ryandavis.io/content/images/2019/08/whatsnew/Slide13.PNG" alt="">
</td>  
</tr>

<tr><td width="373">  
<img src="http://ryandavis.io/content/images/2019/08/whatsnew/Slide18.PNG" alt="">
</td>

<td width="373">

<img src="http://ryandavis.io/content/images/2019/08/whatsnew/Slide19.PNG" alt="">
</td>  
</tr>

</table>]]></description><link>http://ryandavis.io/some-of-whats-new-in-ios13/</link><guid isPermaLink="false">298011b9-b0b3-4f74-8a39-2eedbd482857</guid><category><![CDATA[xamarin]]></category><category><![CDATA[almost-famous]]></category><category><![CDATA[arkit]]></category><category><![CDATA[ios13]]></category><category><![CDATA[coreml]]></category><category><![CDATA[pencilkit]]></category><dc:creator><![CDATA[Ryan Davis]]></dc:creator><pubDate>Fri, 30 Aug 2019 00:00:00 GMT</pubDate></item><item><title><![CDATA[How to have your Xamarin.iOS 13 preview cake and Emit it too]]></title><description><![CDATA[<p>As you might have read, Xamarin <a href="https://devblogs.microsoft.com/xamarin/ios-13-xcode-11/">recently released the first of its Xcode 11 previews</a>, which provide early access to in-progress Xamarin bindings for the new iOS and macOS SDKs. These are useful if you want to start working with new and updated frameworks like <a href="https://developer.apple.com/sign-in-with-apple/">Sign in with Apple</a>, <a href="https://developer.apple.com/documentation/pencilkit">PencilKit</a>, <a href="https://developer.apple.com/documentation/soundanalysis">SoundAnalysis</a> and <a href="https://developer.apple.com/augmented-reality/arkit/">ARKit3</a> - all of which except for the latter are baked into the first preview. Even if you don't want to work with new frameworks, these previews let you test how your apps behave when targeting iOS13 - so you can find out whether changes like <code>UIViewController.ModalPresentationStyle</code>'s defaulting to <code>UIModalPresentationStyle.Automatic</code> are good things or bad things for you (<a href="https://github.com/rdavisau/ar-bound">ARBound</a>, I'm looking at you 😾). </p>

<p>There's plenty of benefit to trying out the previews, but like every non-<code>Xamarin.iOS 12.7.1.x</code> build they suffer from one drawback - no <code>Reflection.Emit</code>! You might be thinking - <em>"What good is a preview release if I can't hot reload new PencilKit features in ARKit3 from an embedded REPL?!?"</em> - which is exactly what I thought too. Don't worry though, if we're happy to get our hands dirty, we can bake our own Xamarin.iOS Xcode 11 Preview with <code>Reflection.Emit</code> available, and even skip ahead of the official preview release cycle while we're at it! </p>

<h5 id="doineedtobotherwiththis">Do I need to bother with this?</h5>

<p>Maybe not - particularly if your use case isn't the aforementioned 'hot-reloading new PencilKit features in ARKit3 from an embedded REPL'. If you want to beat the preview release cycle, or 'just' want to use the interpreter, you're good. As <a href="https://github.com/dalexsoto">Alex Soto</a> mentioned on the <a href="https://channel9.msdn.com/Shows/NET-Community-Standups/Xamarin-NET-Community-Standup-July-3rd-2019-iOS-13-Preview-with-the-iOS-Team">last community standup</a> (a great watch if you want to learn more about how Xamarin.iOS is put together), you can go to the <a href="https://github.com/xamarin/xamarin-macios/tree/xcode11"><code>xcode11</code> branch of xamarin/xamarin-macios</a> and download the builds from any recent commit. Per the standup, the commits in this branch have been through some internal review and the test suites before merging - so although the builds are not official previews, the quality should be reasonable. </p>

<p><img src="https://ryandavis.io/content/images/2019/07/how2getabuild.png" alt="" title=""> <center><em><small>Save yourself a lot of trouble by deciding you don't need SRE and just downloading one of these</small></em></center></p>

<p>So how to decide? A month or so ago I gave a talk on <a href="https://ryandavis.io/practical-uses-for-the-mono-interpreter/">Practical Uses for the Mono Interpreter</a>, and included a separation of the what needs interpreter and what needs SRE:</p>

<p><img src="https://ryandavis.io/content/images/2019/05/interp/Slide44.PNG" alt=""></p>

<p>Activities on the left side rely on "using the interpreter to 'execute' IL (i.e. non-AOT'd code)". This is what you get from <code>--interpreter</code> and is possible in all Xamarin.iOS builds today, including the previously mentioned xcode11 branch builds. Activities on the right side rely on "generating IL, then using the interpreter to 'execute' it", which requires <code>Reflection.Emit</code> to be left in the Mono runtime. Alongside the original interpreter announcement, the Xamarin team made sure the <code>Xamarin.iOS 12.7.1.x</code> series of builds included the appropriate bits. IL generation is a mechanism used by hot reload tools like Continuous, and is also useful for enabling REPL-like environments and arbitrary runtime code execution. If you want to play with these kinds of things - <em>on the device</em> - then yes, you do need to make your own build and should keep reading. </p>
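<p>For reference, <code>--interpreter</code> is passed via the 'additional mtouch arguments' - in a csproj that corresponds to something like the fragment below (adjust the configuration condition to suit your project):</p>

```xml
<!-- Enable the Mono interpreter for device Debug builds by passing
     the flag through to mtouch. -->
<PropertyGroup Condition="'$(Configuration)|$(Platform)' == 'Debug|iPhone'">
  <MtouchExtraArgs>--interpreter</MtouchExtraArgs>
</PropertyGroup>
```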

<h3 id="bakingaxamariniosversionwithreflectionemitsreincluded">Baking a Xamarin.iOS version with Reflection.Emit (SRE) included</h3>

<p>Building your own version of Xamarin.iOS sounds scary, but it's actually pretty straightforward. It's also <a href="https://github.com/xamarin/xamarin-macios/wiki/Build-&amp;-Run">well documented</a> on the repo wiki, so besides the two changes needed to enable SRE, I won't be telling you <em>too</em> much more than what you can find there - but I will give you the step by step. </p>

<p>It probably goes without saying, but <strong>don't use your custom build of Xamarin.iOS for anything important</strong>. You almost certainly don't want to cut release app builds with it - so if you put this on a machine that also acts as a build agent, make sure to take it out of the queue. You can undo the changes we make here by rolling back to a Stable channel build in VS4Mac, but I'm not sure how thorough of a rollback that is (it could be fine, I just don't know) - so <strong>continue at your own risk</strong>. </p>

<p>Depending on how many of the prerequisites you have, how fast your internet is, and how fast your machine is, I'd expect the steps below to take between one and three hours. </p>

<p><strong>Step 1: Have a Mac running Mojave</strong></p>

<p>Catalina drops 32-bit application support and won't run Xcode 9.4, but Xamarin still relies on/supports a few 32-bit pieces. Xamarin is planning to deprecate all 32-bit support in the true Xamarin.iOS 13.x releases, so this restriction will only be in place during the preview period. The upshot is that you need to create your build on Mojave at the moment - don't waste time like I did trying to work around it.</p>

<p><center><img src="http://ryandavis.io/content/images/2019/07/no-catalina-allowed.png" alt="" title=""></center><center><em><small>As if Apple weren't already punishing you enough for using early macOS betas</small></em></center></p>

<p><strong>Step 2: Download all the Xcodes</strong></p>

<p>If you're already doing Xamarin.iOS work, you probably at least have Xcode 10 now. We don't need that where we're going, but we do need Xcode 9.4 and Xcode 11 beta X - where X matches the beta release you're targeting. X is determined by Apple's releases, the Xamarin Mac/iOS team's progress, and which xamarin-macios commit you build off. At the time of writing, the latest commits in <code>xcode11</code> are targeting Xcode 11 beta 3. If you want to go back to an earlier commit (e.g. the one that matches the current first preview), you'd need beta 2. </p>

<p>I like to use <a href="https://github.com/xcpretty/xcode-install">xcode-install</a> to get different versions of Xcode, but it can't handle the current betas. Rather than dig around the Apple Developer portal for the download links, I recommend using <a href="https://xcodereleases.com/">https://xcodereleases.com</a> to quickly get the versions you need.</p>

<p>Once you have both builds, make sure they end up in <code>Applications</code> suffixed with the version. For example:</p>

<ul>
<li><strong>Xcode 9.4</strong> - Xcode94.app</li>
<li><strong>Xcode 11 Beta 3</strong> - Xcode11-beta3.app </li>
</ul>

<p>Your Applications folder might look a bit like this:<center><img src="http://ryandavis.io/content/images/2019/07/many-xcodes.png" alt="" title=""><center><em><small>Since we don't need to work with Xcode 10, we don't need to rename it</small></em></center></center></p>

<p><strong>Step 3: Clone xamarin-macios</strong></p>

<p>In Terminal, navigate to a place you like to put repos (I like <code>~/Source</code>) and clone <code>xamarin/xamarin-macios</code>:</p>

<p><code>git clone --recursive https://github.com/xamarin/xamarin-macios</code></p>

<p>Wait some time while the repo clones, then cd in to the folder, and change to the <code>xcode11</code> branch, OR checkout the specific commit that you're interested in. </p>

<pre><code>cd xamarin-macios  
git checkout xcode11  
</code></pre>

<p>(If you wanted to build a preview that matches the first officially released preview but with SRE, you'd checkout <code>903c3eec31bc7c9f819f958df5f2a567b2d30631</code>)</p>

<p><strong>Step 4: Set up dependencies</strong> </p>

<p>This is pretty much taken straight from the build guide. The xamarin-macios repo has a handy script called <code>system-dependencies.sh</code> which checks for the presence of a build-friendly environment and can automatically resolve some issues, or suggest the resolution. I usually run with the <code>--provision-all</code> flag to get this outcome: </p>

<p><code>./system-dependencies.sh --provision-all</code></p>

<p>If it's your first time building for the preview, it's likely that you'll need to run <code>xcode-select</code> to point at your preview Xcode. The <code>system-dependencies.sh</code> script will give you the exact invocation, so you can copy it from there. Again straight from the guide, you also need a few other dependencies which can be installed using Homebrew. If you don't already have Homebrew, you can use this command to install it (the official Homebrew page also directs you to install it in this manner):  </p>

<pre><code>ruby -e "$(curl -fsSL https://raw.githubusercontent.com/Homebrew/install/master/install)"  
</code></pre>

<p>Then install the needed bits:  </p>

<pre><code>$ brew update
$ brew install libtool autoconf automake bison flex cmake
</code></pre>

<p>With those installed, we're ready to start making our changes.</p>

<p><strong>Step 5: Update tools/mtouch/mtouch.cs</strong> (optional, for now)</p>

<p>As I mentioned earlier, there are two changes we need to make to get SRE powers. One is in the configuration of our Mono build, which we'll look at shortly. The other is in the argument handling of <code>mtouch.cs</code>, which currently prevents us from passing <code>--enable-repl</code> to device builds and builds with the linker enabled.  </p>

<p><img src="http://ryandavis.io/content/images/2019/07/let-me-enable-repl.png" alt="screenshot shows ~4 lines/2 'paragraphs' of checks in mtouch.cs related to the '--enable-repl' flag that were to be removed. It seems the file has changed a little since the blog post so this may not be relevant any longer."></p>

<p>These restrictions make sense in an AOT-only world, but when we have interpreter powers they get in our way, so we can remove these lines. I say this step is optional because the handling of the <code>--interpreter</code> flag currently automatically adds <code>--enable-repl</code>. However, it was not this way prior to the <code>12.7.1.x</code> series of builds, and it may change again in the future - strictly speaking you don't need <code>--enable-repl</code> to make use of the interpreter, and may prefer to also remove that additional logic from the <code>--interpreter</code> flag. In any case, I've included this step in case it becomes necessary in the future. </p>

<p><strong>Step 6: Build with SRE in</strong> (technically, without SRE out)</p>

<p>At this point, we're more or less ready to go. All we have to do is remove the removal of SRE. There are probably many ways that this can be done; my approach is to take a sledgehammer to <strong>external/mono/configure.ac</strong>. Among other things, this file determines which defines are set on the mono build, including <code>DISABLE_REFLECTION_EMIT</code>, which we can see <a href="https://github.com/mono/mono/search?q=DISABLE_REFLECTION_EMIT&amp;unscoped_q=DISABLE_REFLECTION_EMIT">is used across various files</a> to bake out the capability.</p>

<p>The heavy handed solution is to just remove the tests that apply the flags - regardless of what is being built - at the lines that look like this:</p>

<p><img src="http://ryandavis.io/content/images/2019/07/disable-reflection-emit.png" alt="screenshot shows approximately 10 lines/two 'paragraphs' of configure.ac beginning with the line 'if test &quot;x$mono<em>feature</em>disable<em>reflection</em>emit&quot; = &quot;xyes&quot;. both 'paragraphs' should be removed" title=""><center><em><small>A Local Xamarin Developer Removed These Nine Lines - You Won’t Believe What Happened Next!</small></em></center></p>

<p>The challenge is that when we call <code>make world</code>, the <code>external/mono</code> submodule will be reset to a specific commit, so changing it before then will have no effect. There is probably a better way to do it, but since we're only doing it once, my approach is to just update the file after the submodule has been reset. To do that:</p>

<ul>
<li>run <code>make world</code></li>
<li>wait until you see that <code>external/mono</code> has been checked out (the first time this will take a while)
<img src="http://ryandavis.io/content/images/2019/07/z-z-z.png" alt=""></li>
<li>press <code>Control+Z</code> to pause the script and return to the prompt</li>
<li>make the change to <strong>external/mono/configure.ac</strong></li>
<li>run <code>fg</code> to resume execution.</li>
</ul>
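<p>If you'd rather not hand-edit the file while <code>make world</code> is paused, the deletion can be scripted. This is a sketch under the assumption that each guarded block starts at the <code>mono_feature_disable_reflection_emit</code> test and ends at the first unindented <code>fi</code> - verify the result against your checkout before using it:</p>

```shell
# Hypothetical helper: strip the DISABLE_REFLECTION_EMIT guard blocks from a
# configure.ac given as $1, writing the result to $1.new for inspection.
# Assumes each block starts at the mono_feature_disable_reflection_emit test
# and ends at the first line whose only content is 'fi'.
strip_sre_guards() {
  sed '/x\$mono_feature_disable_reflection_emit/,/^ *fi *$/d' "$1" > "$1.new"
}
```

<p>After checking the <code>.new</code> output looks right, move it over the original (e.g. <code>strip_sre_guards external/mono/configure.ac</code> after pausing <code>make world</code>).</p>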

<p>Now we're good to go. </p>

<p>Running <code>make world</code> takes a l o o o n g time, so be patient. I've found that from time to time it fails when running tests, but that is not necessarily fatal, and a subsequent <code>make all -j8</code>  usually succeeds. Once the build succeeds, two commands (again straight from the build guide) will get your new Xamarin.iOS build installed:</p>

<pre><code>make fix-install-permissions # probably only needed the first time  
make install-system  
</code></pre>

<p>Seeing something like this tells you it should have worked!</p>

<p><img src="http://ryandavis.io/content/images/2019/07/look-mom-it-worked.png" alt=""></p>

<p>If you want to go back, 'updating' back to a stable channel build using VS4Mac should do the trick. I'm not sure whether it is a completely clean rollback though.</p>

<p><strong>Step 7: Try it out!</strong></p>

<p>If you made it this far, congratulations! You're ready to try your new SRE-powered build. First you should check that everything looks hunky dory in the About window. Note that if you provisioned all as instructed, you may find you now have a Visual Studio For Mac (Preview) app, and should probably use that. <br>
You can tell that your version is installed by checking the build date (should be today if you built it today), and that the hash matches the commit hash you built against. <img src="http://ryandavis.io/content/images/2019/07/we-did-it.png" alt="" title="">You can now try using the <a href="https://github.com/spouliot/interpreter">interpreter samples</a>, or techniques I've talked about in previous posts. And of course, you can use them with new iOS13 features, like PencilKit and ARKit3. So I finally do get my ARKit + PencilKit REPL extravaganza:</p>

<p><center> <br>
<video controls autoplay loop width="100%"> <br>
    <source src="https://ryandavis.io/content/images/2019/07/arkit-pencilkit.mp4" type="video/mp4">
   </video></center> 
<center><em><small>Yeah drawing is not my strong suit</small></em></center></p>]]></description><link>http://ryandavis.io/how-to-have-your-ios-13-preview-cake-and-emit-it-too/</link><guid isPermaLink="false">fecb4259-4df9-410d-96bb-db6398f482e0</guid><category><![CDATA[xamarin]]></category><category><![CDATA[code]]></category><category><![CDATA[interpreter]]></category><category><![CDATA[reflection-emit]]></category><category><![CDATA[ios13]]></category><category><![CDATA[preview]]></category><dc:creator><![CDATA[Ryan Davis]]></dc:creator><pubDate>Wed, 10 Jul 2019 21:00:00 GMT</pubDate></item><item><title><![CDATA[Declarative Code-Based Xamarin.Forms UI using CSharpForMarkup and Continuous]]></title><description><![CDATA[<p>When it comes to UI for Xamarin.Forms, there's no denying that using XAML to create them (rather than code) is the widespread community preference. Sample a set of posts from <a href="https://www.planetxamarin.com/">Planet Xamarin</a> or submissions for the recent <a href="https://ryandavis.io/xamarin-forms-4-0-challenge-submissions/">Visual and CollectionView challenges</a> and you'll find examples of coded UI are few and far between. In a recent talk (see end of post) I presented a somewhat tongue-in-cheek comparison of code and XAML popularity in the aforementioned challenges:</p>

<p><img src="http://ryandavis.io/content/images/2019/06/xaml-v-code.png" alt="">
<center><em><small>No prizes for guessing who the one person is 👀</small></em></center></p>

<p>Still, discussion regarding code and XAML occurs quite frequently in various arenas. Lately, the presence of alternatives like Flutter and Apple's recent SwiftUI announcement have brought more attention to the topic, as well as increased the discussion around alternate architectures like MVU. I watch these with interest but I find my motivations are simpler - I write my Forms UI in code because before that I was writing iOS UIs in code, and I like to use <a href="https://github.com/praeclarum/Continuous">Continuous</a> for hot reload. When I started introducing Xamarin.Forms into my projects using <a href="https://docs.microsoft.com/en-us/xamarin/xamarin-forms/platform/native-forms">Forms Embedding</a>, continuing to use code was a straightforward decision. </p>

<h4 id="atwitchstream">A twitch stream</h4>

<p><a href="https://twitter.com/davidortinau">David Ortinau</a> spotted the use of coded UI in the Xamarin.Forms UI challenges and invited me to guest on his Twitch stream and demonstrate what the experience of writing your UI in code can be like. You can watch back the stream, in which I walked through building out the login/signup page for Xappy <a href="https://www.youtube.com/watch?v=Mw2F1aHY0tQ">here</a>.</p>

<p>There were two helpers I used in the stream to improve quality of life when writing UI code, <a href="https://twitter.com/vincenth_net">@vincenth_net</a>'s <a href="https://github.com/VincentH-Net/CSharpForMarkup">CSharpForMarkup</a> and <a href="https://twitter.com/praeclarum">@praeclarum</a>'s <a href="https://github.com/praeclarum/Continuous">Continuous</a>, which I'll talk a little about here.</p>

<h4 id="csharpformarkup">CSharpForMarkup</h4>

<p>Using C# to build Xamarin.Forms UIs directly can be a little awkward, and one of the projects I quickly came across when first starting out with code was <a href="https://github.com/VincentH-Net/CSharpForMarkup">CSharpForMarkup</a> - a set of helpers that aims to let you use a declarative style of C# code for Xamarin.Forms UI. CSharpForMarkup wraps and abstracts Xamarin.Forms UI actions that are unwieldy to handle in code, allowing you to write concise, readable, declarative and maintainable user interfaces in C#. Based on your preference, you can use the helpers to produce 'code that looks like XAML', or you can produce code that favours conciseness whilst still remaining readable. This example from the readme demonstrates that idea a little: <img src="http://ryandavis.io/content/images/2019/06/decl-styles.png" alt="" title=""> <br>
Being fully contained within a single .cs file of helper methods, CSharpForMarkup is easily integrated into a new or existing project. You can use them 'just' as helper methods, or you can apply the techniques and practices described on the readme page to help drive a consistent code style across your project. One major peril of writing coded UI is the fact that with code 'anything is possible', including overly clever or specialised solutions. With CSharpForMarkup, you can apply and enforce a consistent, coded-based style for your UI development, and produce code that makes you feel good writing it.</p>
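<p>To give a feel for the declarative style in text form (helper names are as per the CSharpForMarkup readme at the time of writing; <code>LoginViewModel</code> and its properties are made-up stand-ins):</p>

```csharp
using Xamarin.Forms;
// plus the single CSharpForMarkup helpers file added to your project

public class LoginPage : ContentPage
{
    public LoginPage(LoginViewModel vm)
    {
        BindingContext = vm;

        Content = new StackLayout { Padding = 20, Children = {
            new Label  { Text = "Welcome" } .FontSize (32),

            new Entry  { Placeholder = "User name" }
                       .Bind (nameof(vm.UserName)),

            new Entry  { Placeholder = "Password", IsPassword = true }
                       .Bind (nameof(vm.Password)),

            new Button { Text = "Log In" }
                       .Bind (Button.CommandProperty, nameof(vm.LoginCommand))
        }};
    }
}
```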

<p><center>  </center></p>

<blockquote class="twitter-tweet" data-lang="en"><p lang="en" dir="ltr">If you have only created UIs in <a href="https://twitter.com/hashtag/XamarinForms?src=hash&amp;ref_src=twsrc%5Etfw">#XamarinForms</a> using XAML, you should check out <a href="https://twitter.com/rdavis_au?ref_src=twsrc%5Etfw">@rdavis_au</a> code up UIs with <a href="https://twitter.com/vincenth_net?ref_src=twsrc%5Etfw">@vincenth_net</a>’s CSharpForMarkup (<a href="https://t.co/woDFSrwZDG">https://t.co/woDFSrwZDG</a>). According to Ryan “It’s the kind of thing that will make you feel good and happy about what you are doing.” <a href="https://t.co/HSMpnaadit">https://t.co/HSMpnaadit</a></p>&mdash; Michael Stonis (@MichaelStonis) <a href="https://twitter.com/MichaelStonis/status/1141906024548491264?ref_src=twsrc%5Etfw">June 21, 2019</a></blockquote>  

<script async src="https://platform.twitter.com/widgets.js" charset="utf-8"></script>  

<p></p>

<p>As you might guess, use of CSharpForMarkup was a key focus of the stream. Afterwards, Vincent offered to give the code the full CSharpForMarkup treatment - with all the conventions he recommends - and the final result now <a href="https://github.com/davidortinau/Xappy/tree/master/Xappy/Xappy/Content/Scenarios/Login">lives in Xappy</a>. One thing I also did on the stream is demonstrate a "DSL"-style of UI specification, which relied on dedicated helper methods to reduce the amount of boilerplate in the class. Although I tend to think the declarative style is the right balance of consistency / conciseness / maintainability, in specific cases you might find a DSL style justified. </p>

<p><img src="http://ryandavis.io/content/images/2019/06/decl_dsl.png" alt=""></p>

<h4 id="continuous">Continuous</h4>

<p>Continuous is a hot reload plugin created by the very excellent <a href="https://twitter.com/praeclarum">@praeclarum</a> several years ago. I have used it on many projects since then and help keep it alive by updating the VS4Mac plugin when the IDE goes through major changes (<a href="https://github.com/praeclarum/Continuous/pull/45">moving to roslyn</a>, <a href="https://github.com/praeclarum/Continuous/pull/49">using the new code editor</a>). I am a huge fan of Continuous and use it whenever I work on apps. Besides the stream above, you can see some examples of Continuous' power in videos I've posted in the past:</p>

<p><iframe width="1061" height="597" src="https://www.youtube.com/embed/OGd_J1gGYTA" frameborder="0" allow="accelerometer; autoplay; encrypted-media; gyroscope; picture-in-picture" allowfullscreen></iframe> <center><em><small>Hot reloading ARKit with Continuous</small></em></center></p>

<p><iframe width="1144" height="644" src="https://www.youtube.com/embed/RMMccK_OI9w" frameborder="0" allow="accelerometer; autoplay; encrypted-media; gyroscope; picture-in-picture" allowfullscreen></iframe> <center><em><small>Hot reloading a 'real world' complexity app with Continuous  </small></em></center></p>

<p>Over that time, I've learned how to make Continuous sing, and have used it in all manner of scenarios, but the methods for doing so are not always pretty. I am used to them all, but I have to hesitate when I think about the experience that someone new might have after watching my examples. From time to time I have worked on a rethink of Continuous that addresses some of the main pain points (who knows if I will ever finish it), and as an experiment I ported some of those ideas back to the version I used on stream, which is what allowed for the deeper and cleaner integration with Xamarin.Forms Shell. Given the effectiveness of these changes, and how easy they were to integrate, I'd like to apply them back to the core library and maybe do a write up of how to effectively make use of them. </p>

<h4 id="apresentation">A presentation</h4>

<p>Off the back of the Twitch stream, I also gave a quick talk on the same topic at the <a href="https://www.meetup.com/Queensland-based-MonoTouch-and-Mono-for-Android/">Queensland C# Mobile Developers meetup group</a>'s <a href="https://www.meetup.com/Queensland-based-MonoTouch-and-Mono-for-Android/events/262502658/">June 2019 Meetup</a>. This one was a little more hurriedly put together than my normal talks, but (hopefully) gave a good walkthrough of CSharpForMarkup by example and focussed on specific code-based scenarios with a little bit of live coding to demonstrate how the library improves them. Slides for the talk are below. </p>

<p>(Yes, the fringe languages comment was a joke - I love my F#).</p>

<p>Slides (28): <a href="https://ryandavis.io/content/images/2019/06/Declarative_Code-Based_Xamarin_Forms_UI_-_Ryan_Davis_20190625.pdf">PDF</a></p>

<table>  
<tr>  
<td width="373">  
<img src="http://ryandavis.io/content/images/2019/06/declarative-ui/Slide1.PNG" alt="">
</td>  
<td>  
<img src="http://ryandavis.io/content/images/2019/06/declarative-ui/Slide5.PNG" alt="">
</td>  
</tr>

<tr>  
<td width="373">  
<img src="http://ryandavis.io/content/images/2019/06/declarative-ui/Slide10.PNG" alt="">
</td>  
<td>  
<img src="http://ryandavis.io/content/images/2019/06/declarative-ui/Slide18.PNG" alt="">
</td>  
</tr>

<tr>  
<td width="373">  
<img src="http://ryandavis.io/content/images/2019/06/declarative-ui/Slide23.PNG" alt="">
</td>  
<td>  
<img src="http://ryandavis.io/content/images/2019/06/declarative-ui/Slide27.PNG" alt="">
</td>  
</tr>  
</table>]]></description><link>http://ryandavis.io/declarative-code-based-xamarin-forms-ui/</link><guid isPermaLink="false">7795dfc4-9efc-4379-af91-570b7c3ad3a3</guid><category><![CDATA[xamarin]]></category><category><![CDATA[almost-famous]]></category><category><![CDATA[continuous]]></category><category><![CDATA[csharpformarkup]]></category><category><![CDATA[xamarin.forms]]></category><dc:creator><![CDATA[Ryan Davis]]></dc:creator><pubDate>Thu, 27 Jun 2019 22:40:43 GMT</pubDate></item><item><title><![CDATA[Practical Uses for the Mono Interpreter on Xamarin.iOS]]></title><description><![CDATA[<p>At the <a href="https://www.meetup.com/Queensland-based-MonoTouch-and-Mono-for-Android/">Queensland C# Mobile Developers meetup group</a>'s <a href="https://www.meetup.com/Queensland-based-MonoTouch-and-Mono-for-Android/events/261662146/">May 2019 Meetup</a>, I gave a talk on the recently announced Mono interpreter and mixed-mode execution support for Xamarin.iOS entitled <em>"Practical (and not so practical) Uses for the New Mono Interpreter"</em>. The talk covered a little of the basic theory behind the function of the interpreter in Xamarin.iOS, but focused mainly on demonstrations of its use in practice through several demos. These demos are outlined below and slides available at the end of the post.</p>

<h4 id="improveddevbuilditerationtime">Improved Dev Build Iteration Time</h4>

<p>Enabling the interpreter on iOS disables the AOT compilation step for any interpreted assemblies. Since performance is not typically the primary concern at development time (and Debug builds already don't represent Release performance), trading AOT performance for reduced build times is an attractive option when iterating on development of device-based features. The time saved by skipping AOT can be substantial, as demonstrated by my completely unscientific and inadmissible measurements, which if nothing else are at least indicative.</p>

<p><img src="http://ryandavis.io/content/images/2019/05/interp-build-times.png" alt="Comparison of debug build times including the AOT step and without"></p>

<p><strong>Practicality:</strong> <em>10/10 highly recommended, few downsides</em></p>

<h4 id="hotreload">Hot Reload</h4>

<p>The interpreter allows for 'execution' of IL without JIT'ing, allowing us to perform code-based hot reload of device-only features using tools like <a href="https://github.com/praeclarum/Continuous">Continuous</a>. In the talk I demonstrated the improved performance of hot reload on the device for frameworks like SpriteKit. I've written more about hot reload and the interpreter in <a href="https://ryandavis.io/hot-reloading-device-only-features-with-the-new-mono-interpreter/">an earlier blogpost</a>.</p>

<p><img src="https://ryandavis.io/content/images/2019/03/shader-c.gif" alt="Comparison of simulator vs device performance"></p>

<p><strong>Practicality:</strong> <em>7/10 highly recommended, but seemingly a lost art</em></p>

<h4 id="hotpatching">Hot Patching</h4>

<p>Truly transparent hotpatching is something that would likely require crazy runtime-level acrobatics. However, rolling your own hotpatching implementation is viable with support from the interpreter (whether it is advisable is an entirely different question). A home grown hotpatch implementation relies on abstraction or dedicated patch interception mechanisms. Fortunately, .NET and MVVM development favours abstraction-oriented patterns like DI and VM-based navigation, which are good examples of abstractions that can effectively be patched. In the talk, I demonstrated the remote hotpatching of two apps:</p>

<ul>
<li><p>AR Bound, which was patched to include a new menu screen and two new demos
<img src="http://ryandavis.io/content/images/2019/05/hotpatch-ar.png" alt="Overview of hotpatch architecture for ARBound"></p></li>
<li><p>A basic Prism app, which was patched with a replacement service layer implementation, and a replacement XF Content Page + ViewModel.
<img src="http://ryandavis.io/content/images/2019/05/hotpatch-prism-1.png" alt="Overview of prism hotpatch implementation"></p></li>
</ul>
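<p>To illustrate the abstraction-oriented idea (not the actual demo code - the type names here are hypothetical), callers that resolve a service through a mutable registry can be re-pointed at a freshly loaded implementation at runtime:</p>

```csharp
using System;
using System.Collections.Generic;

// A deliberately simple service registry; a real app would use a DI container.
public interface IQuoteService { string GetQuote(); }

public class BuiltInQuoteService : IQuoteService
{
    public string GetQuote() => "shipped with the app";
}

// A 'patch' implementation - in a real hotpatch scenario this would arrive
// as IL downloaded at runtime and executed by the interpreter.
public class PatchedQuoteService : IQuoteService
{
    public string GetQuote() => "downloaded and swapped in at runtime";
}

public static class Registry
{
    static readonly Dictionary<Type, Func<object>> _factories
        = new Dictionary<Type, Func<object>>();

    public static void Register<T>(Func<T> factory) => _factories[typeof(T)] = () => factory();
    public static T Resolve<T>() => (T)_factories[typeof(T)]();
}

public static class Program
{
    public static void Main()
    {
        Registry.Register<IQuoteService>(() => new BuiltInQuoteService());
        Console.WriteLine(Registry.Resolve<IQuoteService>().GetQuote()); // built-in behaviour

        // 'Hotpatch': re-register; anyone resolving through the abstraction
        // picks up the replacement implementation from now on.
        Registry.Register<IQuoteService>(() => new PatchedQuoteService());
        Console.WriteLine(Registry.Resolve<IQuoteService>().GetQuote()); // patched behaviour
    }
}
```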

<p>Whilst very effective, hot patching comes with a host of risks and uncertainties. The author struggles to find the bravery within to try using this technique outside of a dev/QA environment.</p>

<p><strong>Practicality:</strong> <em>-7/10 (yes negative) so enticing, so scary</em></p>

<h4 id="embeddedrepl">Embedded REPL</h4>

<p>Though the interpreter as exposed through Xamarin.iOS is an IL interpreter, we can approximate a C# REPL by making use of the Mono Evaluator, which compiles C# source code and loads the generated IL. With mixed-mode execution, this IL can then be executed by the interpreter. For this use case I presented a demo of a REPL that could perform various C# evaluations - including UI - but also interact with the current app context.</p>
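<p>The Evaluator side of this boils down to feeding source strings to a <code>Mono.CSharp.Evaluator</code> instance (here via the Mono.CSharp NuGet package; the hosting details in the actual demo differ):</p>

```csharp
using System;
using Mono.CSharp;

public static class ReplSketch
{
    public static void Main()
    {
        // Evaluator compiles C# snippets and loads the resulting IL
        var evaluator = new Evaluator(
            new CompilerContext(new CompilerSettings(), new ConsoleReportPrinter()));

        // Statements and expressions are evaluated against shared state
        evaluator.Run("var x = 40;");
        var result = evaluator.Evaluate("x + 2"); // boxed result: 42

        Console.WriteLine(result);
    }
}
```

Under mixed-mode execution on device, the IL the Evaluator emits is executed by the interpreter rather than JIT'ed.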

<p><video controls autoplay loop width="100%"> <br>
    <source src="https://ryandavis.io/content/images/2019/05/repl1.mp4" type="video/mp4"></video></p>

<p><video controls autoplay loop width="100%"> <br>
    <source src="https://ryandavis.io/content/images/2019/05/repl2.mp4" type="video/mp4"></video></p>

<p>Although I forgot to present it (😅), the demo included a basic remote execution capability as well, powered by SignalR. </p>

<p><img src="http://ryandavis.io/content/images/2019/05/remote-execution.png" alt=""></p>

<p>Whilst the practicality of typing code on a soft keyboard is low, the fun factor is very high.</p>

<p><strong>Practicality:</strong> <em><code>Int64.MaxValue</code>/10 REPLs are scientifically proven to be the coolest things in the known universe.</em></p>

<h3 id="slides">Slides</h3>

<p>Links for the slides are below. </p>

<p>Slides (46): <a href="https://ryandavis.io/content/images/2019/05/Practical_and_Not_So_Practical_Uses_for_the_Mono_Interpreter_-_Ryan_Davis_20190528.pdf">PDF</a></p>

<table>  
<tr>  
<td width="373">  
<img src="http://ryandavis.io/content/images/2019/05/interp/Slide1.PNG" alt="">
</td>  
<td>  
<img src="http://ryandavis.io/content/images/2019/05/interp/Slide12.PNG" alt="">
</td>  
</tr>

<tr><td width="373">  
<img src="http://ryandavis.io/content/images/2019/05/interp/Slide17.PNG" alt="">
</td>

<td width="373">

<img src="http://ryandavis.io/content/images/2019/05/interp/Slide20.PNG" alt="">
</td>  
</tr>

<tr>  
<td width="373">  
<img src="http://ryandavis.io/content/images/2019/05/interp/Slide31.PNG" alt="">
</td>  
<td width="373">  
<img src="http://ryandavis.io/content/images/2019/05/interp/Slide39.PNG" alt="">
</td>  
</tr>

<tr><td width="373">  
<img src="http://ryandavis.io/content/images/2019/05/interp/Slide42.PNG" alt="">
</td>

<td width="373">

<img src="http://ryandavis.io/content/images/2019/05/interp/Slide44.PNG" alt="">
</td>  
</tr>

</table>]]></description><link>http://ryandavis.io/practical-uses-for-the-mono-interpreter/</link><guid isPermaLink="false">0cd93194-eba9-4482-ac7e-88953c344da3</guid><category><![CDATA[xamarin]]></category><category><![CDATA[almost-famous]]></category><category><![CDATA[xamarin.ios]]></category><category><![CDATA[interpreter]]></category><category><![CDATA[mono]]></category><category><![CDATA[repl]]></category><dc:creator><![CDATA[Ryan Davis]]></dc:creator><pubDate>Tue, 28 May 2019 19:57:00 GMT</pubDate></item><item><title><![CDATA[DumpEditable - an extensible inline object editor extension for LINQPad]]></title><description><![CDATA[<p>It's no secret that a big fan of <a href="https://linqpad.net/">LINQPad</a>. Though I originally started out using it for data manipulation and visualisation in a previous life, over time my LINQPad queries have become more and more sophisticated, with high levels of interactivity and extensive use of LINQPad's layout and helper functions (some  examples of more gratuitous LINQPad queries at the end of this post). LINQPad comes with sophisticated interactivity feaures - particularly since the new control suite introduced in 5.3, which gives you familiar desktop controls (textbox, radio buttons, combos, sliders etc.) with events that you can use to react to changes. Still, often I find myself wanting to quickly add a small amount of interactivity to a query without being too bothered about looks, typically in order to expose basic configuration-like functionality. After having rolled many basic configuration editors for different queries, each with varying levels of quality and capability, I decided to take a stab at putting together something a little more general purpose - the result was  <strong><a href="https://github.com/rdavisau/linqpad-dump-editable">DumpEditable</a></strong>.</p>

<p><center><video loop autoplay controls muted width="100%"> <br>
<source src="http://ryandavis.io/content/images/2019/05/dump-editable.m4v"> <br>
</video></center></p>

<h3 id="thebasics">The basics</h3>

<p>DumpEditable is essentially a property editor extension that lets you dump an editable representation of an object to the results view, so that you can then modify it interactively and respond to changes in your query. In the spirit of LINQPad's own <code>Dump</code> extension, <code>DumpEditable</code> is available on all objects (with various caveats that I am still exploring), and returns the original object - so can be chained or placed in the middle of a pipeline. The idea is that for many basic cases, what DumpEditable gives you out of the box will be 'good enough', letting you get interactivity 'for free' and allowing you to focus on writing query logic. For example, here's the output you get from dumping a basic POCO with a range of properties including strings, numbers, dates, enums and booleans:<img src="http://ryandavis.io/content/images/2019/05/basic-mini.png" alt="demonstration of basic output" title=""> <br>
Pretty cool, right? Without writing any special code, we got a fully editable implementation of our <code>Pet</code>, including reasonable handling of enums, nullable booleans and the collection of strings. Because it's handy to work with anonymous types in LINQPad, DumpEditable works with those too, and even allows them to be modified - something you can't do from C# code generally.</p>

<p>If your query has a main loop (like all the gratuitous examples) you can read updated values in the loop body. Otherwise, you'll probably want to be notified when changes are made to your object. You can do this by taking a reference to the <code>EditableDumpContainer</code> that is created by DumpEditable (available as an <code>out</code> parameter of <code>DumpEditable</code>) and using one of the three change notification methods it provides. <img src="http://ryandavis.io/content/images/2019/04/dump-editable/change-handling.png" alt="demonstration of change handling methods" title="">  </p>
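<p>In code, the three options look roughly like this (a from-memory sketch of the API - the README has the authoritative signatures; <code>Pet</code> is the POCO from earlier):</p>

```csharp
// LINQPad query fragment - assumes the DumpEditable.LINQPad package is referenced
var pet = new Pet { Name = "Mr Whiskers" }
    .DumpEditable(out EditableDumpContainer container);

// 1. coarse-grained: fires whenever anything changes
container.OnChanged += () => "something changed".Dump();

// 2. per-property, untyped: property info and new value
container.OnPropertyValueChanged += (obj, prop, val) => $"{prop.Name} -> {val}".Dump();

// 3. per-property, strongly typed, for a specific property
container.AddChangeHandler(p => p.Name, (p, name) => $"new name: {name}".Dump());
```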

<h3 id="customisation">Customisation</h3>

<p>Whilst DumpEditable aims to play in the 'near enough is good enough' space - giving you cheap interactivity with low to no fuss - I have tried to design it with extensibility in mind. Internally, DumpEditable uses the (poorly named?) concept of <code>EditorRule</code>s to determine how to present an editable version of an object. An <code>EditorRule</code> consists of a <code>match</code> function, which returns true if the rule should be used for a given property, and a <code>getEditor</code> function, which returns the content (the 'editor') that should be rendered by LINQPad proper for matching properties. As a user of DumpEditable you can add your own <code>EditorRule</code>s, and these will take precedence over any existing ones. Writing one from scratch looks something like this: <img src="https://ryandavis.io/content/images/2019/04/dump-editable/editor-rule-foodselector.png" alt="" title=""> There are a few helpers in the library to make adding your own editors easier, many of which are covered in <a href="https://github.com/rdavisau/linqpad-dump-editable">the README</a> and demonstrated in the built-in samples, including the slider control:<img src="https://ryandavis.io/content/images/2019/05/dump-editable/slider-percent.png" alt="demonstration of slider editor" title="">  </p>

<h3 id="coolhowcaniuseit">Cool! How can I use it?</h3>

<p>You can install DumpEditable via NuGet: <a href="https://www.nuget.org/packages/DumpEditable.LINQPad">DumpEditable.LINQPad</a>. DumpEditable comes with samples that will automatically be added to your LINQPad samples pane. There's also more documentation at the <a href="https://github.com/rdavisau/linqpad-dump-editable">Github repo</a>. The focus of DumpEditable to date has centered on my own use cases, so if you think I'm missing something that should be handled by DumpEditable, please let me know on GitHub. Happy Editing! </p>

<h5 id="bonusgratuitoususesoflinqpad">Bonus: Gratuitous uses of LINQPad</h5>

<p>Below are a few clips of some wilder LINQPad queries I've put together:</p>

<ul>
<li>(left) <a href="https://gist.github.com/rdavisau/b6e23ca79fe12b54de4e">Internet is down game "AI"</a></li>
<li>(right) <a href="https://ryandavis.io/how-not-to-translate-a-videogame/">Automated video game translator</a></li>
<li>(bottom) <a href="https://twitter.com/rdavis_au/status/1109259433777852416">Automated Super Smash Brothers Classic Mode tracker</a> </li>
</ul>

<video loop autoplay controls muted width="42.25%">  
<source src="http://ryandavis.io/content/images/2016/03/the-internet-is-down.m4v">  
</video>  

<video loop autoplay controls muted width="57%">  
<source src="http://ryandavis.io/content/images/2019/04/translator-720.m4v">  
</video>  

<video loop autoplay controls muted width="100%">  
<source src="http://ryandavis.io/content/images/2019/04/smash.mp4">  
<center><em><small>left: automated 'internet is down' player, right: realtime videogame translator, center: realtime smash ultimate tracker</small></em></center>  
</video>]]></description><link>http://ryandavis.io/dumpeditable/</link><guid isPermaLink="false">77456e76-afd4-4912-a4e2-883ffbae409e</guid><category><![CDATA[code]]></category><category><![CDATA[linqpad]]></category><category><![CDATA[dumpeditable]]></category><dc:creator><![CDATA[Ryan Davis]]></dc:creator><pubDate>Tue, 14 May 2019 21:30:00 GMT</pubDate></item><item><title><![CDATA[Xamarin Forms 4.0 Challenge Submissions]]></title><description><![CDATA[<p>Over the last month or two, Xamarin has run two challenges involving completing small tasks with upcoming Xamarin.Forms 4 features - the <a href="https://devblogs.microsoft.com/xamarin/join-the-xamarin-visual-challenge/">Visual Challenge</a> and the <a href="https://devblogs.microsoft.com/xamarin/xamarin-forms-4-0-collectionview-challenge/">CollectionView Challenge</a>. As a relatively staunch <em>"Xamarin Native"</em> developer that finds himself using progressively more Forms Embedding as time passes, I thought these would be good opportunities to check out the new features and catch up on Forms progress. Below I've included the submissions I made for each challenge.</p>

<h4 id="visualchallenge">Visual Challenge</h4>

<p>Arranged by <a href="https://twitter.com/davidortinau">David Ortinau</a> - Senior Program Manager on the Forms team and deliverer of many an entertaining presentation - the Visual Challenge was centered on the new Xamarin.Forms 'Visual' feature, which aims to provide a visually identical, or nearly identical, UI experience on iOS and Android out of the box. Visual is realised through 'themes', the first of which, and the team's current focus, is a <a href="https://material.io/">Material Design</a> style. </p>
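<p>For the unfamiliar, opting in is essentially a one-liner - setting <code>Visual</code> on a page applies the theme to its descendant controls (a minimal sketch; the class name is a placeholder):</p>

```xml
<ContentPage xmlns="http://xamarin.com/schemas/2014/forms"
             xmlns:x="http://schemas.microsoft.com/winfx/2009/xaml"
             x:Class="VisualChallenge.MainPage"
             Visual="Material">
    <!-- Renders with the Material look on both iOS and Android -->
    <Button Text="Hello, Visual" />
</ContentPage>
```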

<p>The ask of the Visual Challenge was to use the new feature to replicate an existing screen and provide a comparison of the iOS and Android outputs as well as a write up on the experience. For the challenge, I chose to use a screen from the Qantas Frequent Flyer app, an app I regard as quite aesthetically pleasing. Though I didn't strive for a perfect reproduction (e.g. notice that I didn't recolor some images), the results overall were pretty good:</p>

<p><img src="http://ryandavis.io/content/images/2019/04/qff-home.png" alt="QFF Visual Challenge Submission"></p>

<p>The screen was easy to reproduce using Xamarin.Forms layouts and controls. For me it highlighted the importance of a good clean design, and how sensible defaults (which Material Visual offers over stock XF) can guide the user towards that. My primary piece of <a href="https://github.com/davidortinau/VisualChallenge/pull/50">feedback</a> on Visual was that defaults for some controls (like <code>Label.TextColor</code>) were not the same between platforms, which the team is planning to address. The other thing I would have liked is a more consistent tab bar appearance, and/or ability to customise it. </p>

<p>As the only submission that contained a <a href="https://github.com/davidortinau/VisualChallenge/pull/50/commits/71c01c4012befc4cde60b511628ef20b4a3d08c3#diff-0d75a25318de3a55d5860eacda96887b">coded UI</a> (yes, I checked every single other submission), I got a little shoutout from David on Twitter:</p>

<blockquote class="twitter-tweet" data-lang="en"><p lang="en" dir="ltr">I know XAML is super popular for creating UI in <a href="https://twitter.com/hashtag/XamarinForms?src=hash&amp;ref_src=twsrc%5Etfw">#XamarinForms</a>. Sometimes it&#39;s educational (even inspirational) to see another approach. Take a look at what <a href="https://twitter.com/rdavis_au?ref_src=twsrc%5Etfw">@rdavis_au</a> did in our VisualChallenge using <a href="https://twitter.com/vincenth_net?ref_src=twsrc%5Etfw">@vincenth_net</a>&#39;s CSharpForMarkup: <a href="https://t.co/OG1pRANP6I">https://t.co/OG1pRANP6I</a></p>&mdash; David Ortinau (@davidortinau) <a href="https://twitter.com/davidortinau/status/1112465342058643458?ref_src=twsrc%5Etfw">March 31, 2019</a></blockquote>  

<script async src="https://platform.twitter.com/widgets.js" charset="utf-8"></script>  

<p><em><small>Look at all the likes! I know you coded UI folks are out there... somewhere.. </small></em></p>

<p>There were lots of other good submissions to the Visual Challenge, many of which can be seen in the <a href="https://devblogs.microsoft.com/xamarin/visual-challenge-conquered/">roundup post</a>.</p>

<h4 id="collectionviewchallenge">CollectionView Challenge</h4>

<p>Hot on the heels of the Visual Challenge, <a href="https://twitter.com/paul_dipietro">Paul DiPietro</a> (PM on the Forms team) put out a similar challenge focussed on the new <code>CollectionView</code> control. <code>CollectionView</code> is the XF4 successor to <code>ListView</code>, boasting a simpler API with more power and flexibility with respect to layout options.</p>

<p>The CollectionView Challenge was similar in concept to the Visual Challenge - take an existing <code>ListView</code>, or screen with a <code>ListView</code>-like structure, and recreate it using <code>CollectionView</code> - again, with comparison screenshots and a writeup on the experience. Once more, I chose to use a screen from the Qantas app:</p>

<p><img src="http://ryandavis.io/content/images/2019/04/qff-feed.png" alt="QFF CollectionView Challenge Submission"></p>

<p>Although the layout was not terribly complex, it was a good demonstration of the benefit provided by <code>CollectionView</code> because of the grid-like arrangement - prior to <code>CollectionView</code> this would not be possible with out-of-the-box XF controls. I found the control easy to work with and performance to be good. My primary <a href="https://github.com/pauldipietro/CollectionViewChallenge/pull/20">feedback</a> on the <code>CollectionView</code> was a desire for hooks to better allow content to be animated - the lack of which is my primary issue with the existing <code>ListView</code> control, and the reason that even in Forms Embedding apps I still develop list-based screens natively. Whilst I was able to add a basic fade-in effect using a bit of a hack, it had some undesirable qualities (content 'fading in again' when scrolling back) and would be difficult to extend beyond this. I typically like to stagger animations or use item position to influence the way an item appears, and I think there's an opportunity to allow that capability from XF. </p>
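<p>The grid-like arrangement that motivated the choice takes only a few lines with <code>CollectionView</code> (a sketch - the bindings and template contents are illustrative, not from the submission):</p>

```xml
<CollectionView ItemsSource="{Binding FeedItems}">
    <CollectionView.ItemsLayout>
        <!-- Two-column vertical grid - not achievable with ListView -->
        <GridItemsLayout Orientation="Vertical" Span="2" />
    </CollectionView.ItemsLayout>
    <CollectionView.ItemTemplate>
        <DataTemplate>
            <Frame Margin="4">
                <Label Text="{Binding Title}" />
            </Frame>
        </DataTemplate>
    </CollectionView.ItemTemplate>
</CollectionView>
```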

<p><em>Update: after the submission, I took a crack at hacking staggered entry animations in without modifying <code>CollectionView</code> itself, to prove the concept:</em></p>


<video height="830" controls autoplay loop style="border: 1px solid black;">  
<source src="https://ryandavis.io/content/images/2019/04/entrance-animations.mp4" type="video/mp4">  
</video>  

<p></p>

<p>The CollectionView challenge technically runs until the end of April, after which I expect there'll be a roundup post similar to that for Visual Challenge.</p>

<h4 id="overall">Overall</h4>

<p>I was impressed by the new Forms 4 features. Visual and a solid <code>CollectionView</code> class (preferably with hooks to enable the kind of animation I described) are excellent additions. Beyond that, my main want is the ability to do shared element transitions - it appears <a href="https://twitter.com/jsuarezruiz">Javier Suárez Ruiz</a> has been experimenting with exactly this and I eagerly await the write up: </p>

<blockquote class="twitter-tweet" data-lang="en"><p lang="en" dir="ltr">Playing with shared element transitions and animations. In addition to BindableLayout, CollectionView, etc. Soon article and sample. <a href="https://twitter.com/hashtag/XamarinForms?src=hash&amp;ref_src=twsrc%5Etfw">#XamarinForms</a> <a href="https://t.co/wo9lxFKJvN">pic.twitter.com/wo9lxFKJvN</a></p>&mdash; Javier Suárez Ruiz (@jsuarezruiz) <a href="https://twitter.com/jsuarezruiz/status/1119195954551435266?ref_src=twsrc%5Etfw">April 19, 2019</a></blockquote>  

<script async src="https://platform.twitter.com/widgets.js" charset="utf-8"></script>  

<p><br></p>

<p>With all this in the box, the needle continues to swing on choice of Xamarin.Forms vs Xamarin.Native. I am a big fan of my 'Native app shell, Forms-embedded pages' approach, because of the escape hatches it provides. However, as XF <a href="https://docs.microsoft.com/en-us/xamarin/xamarin-forms/app-fundamentals/shell">Shell</a> matures, and the benefits of letting that be someone else's problem increase, abandoning even embedding becomes a possibility. In any case, I'm pleased to see the progress and to have good options available.</p>]]></description><link>http://ryandavis.io/xamarin-forms-4-0-challenge-submissions/</link><guid isPermaLink="false">6195e7ed-340e-4e31-9fcc-d79b688a7736</guid><category><![CDATA[xamarin]]></category><category><![CDATA[code]]></category><category><![CDATA[xamarin-forms]]></category><category><![CDATA[visual]]></category><category><![CDATA[material]]></category><category><![CDATA[collectionview]]></category><dc:creator><![CDATA[Ryan Davis]]></dc:creator><pubDate>Mon, 22 Apr 2019 22:14:09 GMT</pubDate></item><item><title><![CDATA[Introduction to ARKit (Video)]]></title><description><![CDATA[<p>At the <a href="https://www.meetup.com/Melbourne-Xamarin-Meetup/">Melbourne Xamarin Meetup</a> <a href="https://www.meetup.com/Melbourne-Xamarin-Meetup/events/260138043/">April 2019 Meetup</a>, I gave a third (!) rendition of my talk from last year on ARKit - Apple's Augmented Reality (AR) - framework for mobile. You can see more about the original talk on my <a href="https://ryandavis.io/introduction-to-arkit/">earlier post</a>.</p>

<p>This time the meetup was streamed on Twitch, potentially signalling my transition from chatroom memer to full-time streamer (but probably not). It is also now published on YouTube, so you can watch it back at your leisure (~1h):</p>

<iframe width="952" height="536" src="https://www.youtube.com/embed/lRPzisWWats?t=257" frameborder="0" allow="accelerometer; autoplay; encrypted-media; gyroscope; picture-in-picture" allowfullscreen></iframe>

<p>Again, I added a new demo to the <a href="https://en.wikipedia.org/wiki/EarthBound">Earthbound</a>-themed demo app to keep it fresh. This time we looked at 3D object detection. I found it a bit flaky and it didn't go perfectly during the talk, but with a bit of luck the next iteration of ARKit will improve it. The updated source is <a href="https://github.com/rdavisau/ar-bound">on Github</a>, and a demo of this feature is here: </p>

<iframe width="952" height="782" src="https://www.youtube.com/embed/8JgBW64wAhA" frameborder="0" allow="accelerometer; autoplay; encrypted-media; gyroscope; picture-in-picture" allowfullscreen></iframe>

<p>(Since there is no Ninten amiibo, I used the Xamarin monkey as a stand-in - it was actually a lot better for object detection; the amiibo may be a little too small.)</p>

<p>Links for the slides are below. </p>

<p>Slides (42): <a href="https://ryandavis.io/content/images/2019/04/Introduction_to_ARKit_-_Ryan_Davis_-_20190417.pdf">PDF</a></p>

<table>  
<tr>  
<td width="373">  
<img src="http://ryandavis.io/content/images/2019/04/intro-to-arkit-melbourne.png" alt="">
</td>  
<td>  
<img src="http://ryandavis.io/content/images/2018/11/Slide6.PNG" alt="">
</td>  
</tr>  
<tr> 

<td width="373">  
<img src="http://ryandavis.io/content/images/2018/11/Slide10.PNG" alt="">
</td>

<td width="373">  
<img src="http://ryandavis.io/content/images/2018/11/Slide11.PNG" alt="">
</td>  
</tr>

<tr>  
<td width="373">  
<img src="http://ryandavis.io/content/images/2018/11/Slide18.PNG" alt="">
</td>  
<td width="373">  
<img src="http://ryandavis.io/content/images/2018/11/Slide29.PNG" alt="">
</td>  
</tr>  
</table>]]></description><link>http://ryandavis.io/introduction-to-arkit-video/</link><guid isPermaLink="false">93929185-af87-478f-8fd6-df562d20323b</guid><category><![CDATA[xamarin]]></category><category><![CDATA[almost-famous]]></category><category><![CDATA[arkit]]></category><category><![CDATA[earthbound]]></category><dc:creator><![CDATA[Ryan Davis]]></dc:creator><pubDate>Thu, 18 Apr 2019 22:36:00 GMT</pubDate></item></channel></rss>