invertase / react-native-firebase

🔥 A well-tested feature-rich modular Firebase implementation for React Native. Supports both iOS & Android platforms for all Firebase services.
https://rnfirebase.io

Firebase Storage fail on JSON/Text data #302

Closed: fungilation closed this issue 6 years ago

fungilation commented 7 years ago

Environment

  1. Target Platform (e.g. iOS, Android): iOS
  2. Development Operating System (e.g. macOS Sierra, Windows 10): macOS Sierra 10.12.6
  3. Build tools (Xcode or Android Studio version, iOS or Android SDK version, if relevant): Xcode, iOS 10.3
  4. React Native version (e.g. 0.45.1): 0.46.4
  5. RNFirebase Version (e.g. 2.0.2): 2.0.5

I'm at a loss getting Storage to work with putFile() (following the approach in https://github.com/invertase/react-native-firebase/issues/170#issuecomment-316345243). My sample code:

import RNFS from 'react-native-fs'
// need package https://github.com/Caligatio/jsSHA
import jsSHA from "jssha"

import RNFirebase from 'react-native-firebase'
const firebase = RNFirebase.initializeApp({ debug: false, persistence: true })
firebase.database().goOnline()
firebase.auth().signInAnonymously()

// contentHash is a SHA3 hash of result.content; result is the object to be stored in Firebase
let contentHash = new jsSHA("SHA3-256", "TEXT")
contentHash.update(result.content)
contentHash = contentHash.getHash("HEX")

// create the path to write to, using contentHash
var path = `${RNFS.DocumentDirectoryPath}/${contentHash}.json`

// The long sample text to be written to Firebase follows. This is equivalent to my original code, with result being JSON returned from an external API:
// result = JSON.stringify(result)
result = '{"title":"Why Apple’s glasses won’t include ARKit","content":"<div id=\"ws0\"></div>About the only thing more exciting than <a href=\"https://medium.com/super-ventures-blog/why-is-arkit-better-than-the-alternatives-af8871889d6a\" class=\"markup--anchor markup--p-anchor\">ARKit</a> is the thought of Apple Glasses that run ARKit. There are rumors and rumors-of-rumors all claiming &#x201C;when not if&#x201D; they are coming. I&#x2019;m going to attempt to answer the &#x201C;<em class=\"markup--em markup--p-em\">when&#x201D;</em> by looking in depth at the &#x201C;<em class=\"markup--em markup--p-em\">what</em>&#x201D; technology is needed. I know glasses are being worked on at Apple, and the prototypes are state of the art. I also know what it takes to build a full-stack wearable AR HMD, having built fully functional prototypes from scratch. There are a bunch of elements that need to work before a consumer product can exist. These elements don&#x2019;t all exist today (even at the state of the art).</p><figure id=\"5d29\" class=\"graf graf--figure graf-after--p\"><img class=\"progressiveMedia-noscript js-progressiveMedia-inner\" src=\"https://cdn-images-1.medium.com/max/1600/1*kFjdcYiye0e0Y8Qi_ZIVYg.jpeg\"><figcaption class=\"imageCaption\">Freudian slip?</figcaption></figure><p id=\"4886\" class=\"graf graf--p graf-after--figure\"><div id=\"ws1\"></div>This post is a &#x201C;bottom-up&#x201D; look at what is needed before Apple (or anyone) can ship the consumer product we all want. I&#x2019;ll avoid hyperbole &amp; reliance on magic R&amp;D breakthroughs. I&#x2019;ll list what tech problems still need to be solved, give some indication of where the state of the art is today for each element, and make an educated guess as to when it will be &#x201C;consumer ready&#x201D;.</p><div id=\"ws2\"></div>I don&#x2019;t have any specific knowledge of Apple&#x2019;s roadmaps. However I do have <a href=\"https://medium.com/super-ventures-blog/why-is-arkit-better-than-the-alternatives-af8871889d6a\" class=\"markup--anchor markup--p-anchor\">a <strong class=\"markup--strong markup--p-strong\">great</strong> deal of knowledge of the underlying enabling technologies </a>&amp; design problems, a large number of friends who have seen small unreleased pieces of Apple&#x2019;s current AR work, and enough time in the tech industry to see how history rhymes wrt platform transitions. From there I&#x2019;m just joining dots based on my experience designing/building similar things over the last 9 years.</p><div id=\"ws3\"></div>First lets define the end state. ARKit Glasses for the sake of this post means fashionable glasses that we want to wear in public, that can render digital content as if it&#x2019;s part of the real world, and are internet connected with an App ecosystem. There&#x2019;s no point getting pedantic about this definition. There will be great products that we buy &amp; love that deliver part of this vision before it arrives (we all loved our Palm Pilots &amp; Blackberries at the time) and there will be better &amp; better versions that solve problems shipped in version 1. What I&#x2019;m trying to explore is: when could Apple ship <em class=\"markup--em markup--p-em\">their</em> version of AR Glasses that &#x201C;just work&#x201D; <em class=\"markup--em markup--p-em\">and</em> consumers desire them.</p><div id=\"ws4\"></div>As an aside, when it comes to glasses/eyewear/HMDs the term &#x201C;AR&#x201D; is muddied. 
There&#x2019;s a ton of confusion about what is a wearable Heads-Up-Display (HUD) and what are AR Glasses. They both put a transparent display on your face (maybe you look right through it or maybe it&#x2019;s off to the side. Maybe it covers one eye or both), but the UX (and overall product) is completely different. Marketing departments have been calling HUDs AR Glasses, and calling AR Glasses things like Mixed Reality or Holographic displays. The difference is simple to detect: if all the content is &#x201C;stuck to the display&#x201D; i.e. you turn your head and <em class=\"markup--em markup--p-em\">all</em> the content stays on the display, it&#x2019;s a HUD. Google Glass is a good example, also Epson Moverio (in fact most head mounted displays are HUDs, though you can often add an AR SDK like Vuforia and turn them into <em class=\"markup--em markup--p-em\">simple</em> AR Glasses. This is what ODG do for example). A HUD alone isn&#x2019;t AR. It&#x2019;s just a regular display you wear on your head. If the content can be &#x201C;stuck to the real world&#x201D; like ARKit enables with a phone, or Hololens, or Meta, <em class=\"markup--em markup--p-em\">then</em> you have a pair of AR Glasses. AR Glasses are a superset. They are a HUD with a see-through lens plus 6dof tracking plus a lot more&#x2026;</p>FYI this <a href=\"http://arglassesbuyersguide.com/\" class=\"markup--anchor markup--p-anchor\">AR Glasses buyers guide</a> is the definitive list of AR Glasses &amp; wearable HUD&#x2019;s that are on the market</p><div id=\"ws5\"></div><figure id=\"2447\" class=\"graf graf--figure graf-after--p\"><img class=\"progressiveMedia-noscript js-progressiveMedia-inner\" src=\"https://cdn-images-1.medium.com/max/1600/1*Epey2JP6pnc01G6iZDWA8A.jpeg\"><figcaption class=\"imageCaption\">This is the UX of AR Glasses. The content stays in the world where you put it. When you turn your head, the TV stays in place on the wall. If you were using Google Glass (which is a HUD, not AR), the TV would stay in the display above your eye all the time as you move&#xA0;around.</figcaption></figure><p id=\"82bf\" class=\"graf graf--p graf-after--h4\"><div id=\"ws6\"></div>Here&#x2019;s 8 big problems that still need to be solved for Apple (well, anyone) to ship Consumer AR Glasses. The important thing to understand here is <em class=\"markup--em markup--p-em\">why</em> each problem needs to be solved.</p><div id=\"ws7\"></div><li id=\"6023\" class=\"graf graf--li graf-after--p\">Fashionable and cool hardware. This is most important IMO. It&#x2019;s hard to think of anything that is a more personal wearable object than what is worn around our eyes. Whatever we wear says a lot about us (good or bad). There&#x2019;s nothing stopping great designers designing great fashionable eyewear today, the constraint here today is the tech. Enterprise AR will get traction first purely because people will be paid to wear ugly versions that have the tech problems solved but not the design problems.</li></ul><div id=\"ws8\"></div><figure id=\"c77c\" class=\"graf graf--figure graf--layoutOutsetLeft graf-after--li\"><img class=\"progressiveMedia-noscript js-progressiveMedia-inner\" src=\"https://cdn-images-1.medium.com/max/1200/1*LdMRJIzAUxjCezF3KmWE-g.jpeg\"><figcaption class=\"imageCaption\">A high-res OLED microdisplay from e-magin, suitable for a HMD (or a Watch). 
The image is projected into the waveguide which spreads it over the plastic lens in the glasses so you can see it in front of your&#xA0;face.</figcaption></figure><ul class=\"postList\"><div id=\"ws9\"></div><li id=\"1d9d\" class=\"graf graf--li graf-after--figure\">Optics that fit inside a consumer frame and are bright enough for daylight use, sharp enough to easily read text &amp; have a wide enough field of view. There&#x2019;s really not much point to AR products that only work indoors. The whole point of AR is that you can get out &amp; about &amp; wear them as part of your daily life. The optics need to work outdoors on sunny days, and be sharp enough to read text. FoV is less important to AR than VR as the utility use-cases aren&#x2019;t as immersive as games which depend on peripheral vision. FoV mostly matters to avoid the distration from clipping the content. Today the state of the art <em class=\"markup--em markup--li-em\">that can be manufactured in volume</em> is an OLED microdisplay powering an injection molded waveguide. Single focal plane. We&#x2019;re a year or two away from nicely functional better displays, and probably another year or so after that before those better displays can be manufactured in volume. I heard a story a couple of years ago about a very technical strategic investor from a large OEM who passed on investing in Magic Leap early on. Not because he thought the displays were snake oil, they weren&#x2019;t, they worked &amp; the demos were amazing, but because of the challenges he understood in scaling up the manufacturing volume and maintaining a good yield without optical defects. I haven&#x2019;t heard that ML has solved these challenges, and The Information reported recently they may be using a different &amp; easier to manufacture type of display. This is the reason the military still uses OLEDs &amp; waveguides today (not for a lack of funding &amp; research into better systems!). Also the unit cost to produce a wave guide is very low ($1&#x2013;2) if it is injection molded. There&#x2019;s a 7 figure up-front one-time optics design cost, and the hi-res micro-oleds have been expensive but they are dropping in price fast. Also as yet no one has applied smartphone scale production economics onto hmd manufacturing.</li></ul><div id=\"ws10\"></div><figure id=\"1fb8\" class=\"graf graf--figure graf--layoutOutsetLeft graf-after--li\"><img class=\"progressiveMedia-noscript js-progressiveMedia-inner\" src=\"https://cdn-images-1.medium.com/max/1200/1*Uh7VpBkpfcZdKpIrwD-U3w.jpeg\"><figcaption class=\"imageCaption\">Some of <a href=\"https://medium.com/@marknb00\" class=\"markup--user markup--figure-user\">Mark Billinghurst</a> research from 2014. Five years out from then would&#xA0;mean&#x2026;</figcaption></figure><ul class=\"postList\"><div id=\"ws11\"></div><li id=\"7716\" class=\"graf graf--li graf-after--figure\">Hardware to capture &#x201C;natural&#x201D; input from a user and software to reliably determine the user&#x2019;s intent from the input. This is big and not close to being solved. The more you think about it the more difficult you realize the problems is. Lots of efforts are going into perfecting single modes of input (perfect voice recognition, perfect gestures, perfect computer vision etc) but even if you can perfect one mode (I doubt anyone can), there are going to be lots of circumstances where the user will never want to use that mode. Eg voice input during a movie (a watch tap is better) or gestures while in public (voice might be better). 
To avoid being horribly compromized &amp; embarrassing to use (at least some times) a mutli-modal system is needed with an AI to choose which input system best captures the users intent at that time. The state of the art here is research done by my SV partner <a href=\"https://medium.com/@marknb00\" class=\"markup--user markup--li-user\">Mark Billinghurst</a> in his lab, on <a href=\"https://www.slideshare.net/marknb00/hands-and-speech-in-space-multimodal-input-for-augmented-reality\" class=\"markup--anchor markup--li-anchor\">multi-modal input for AR</a>. We understand Apple and <a href=\"https://www.youtube.com/watch?v=QTz1zQAnMcU&amp;t=164s\" class=\"markup--anchor markup--li-anchor\">MSFT</a> and Google (I recently heard a rumor of a motion tracked ring) are all working on multi-modal systems. I&#x2019;d guess these are 9&#x2013;12 months away from shipping in products (in a simple form). Airpods/Siri + iPhone are probably the best example of sort-of multi-modal input that is widely available on the market today. You can control your phone by either tapping the display or your ear or talking. It&#x2019;s still very basic. You are the AI deciding which mode to use.</li><li id=\"719c\" class=\"graf graf--li graf-after--li\">Sensors and Processors that give high enough performance-per-watt to work for long periods without heat/weight ergonomic concerns. Better integration also means more freedom to be fashionable for the designers. Moving from &#x201C;works on my phone&#x201D; to &#x201C;works on a see through display&#x201D; means finding big improvements in power-per-watt and motion-to-photon engineering. Just because some AR feature works nice on my iPhone, doesn&#x2019;t mean it can just be copied over to glasses. 3D reconstruction, Machine Learning applications and coherent rendering are examples where the state of the art can (just) be achieved today on a phone, but they are quite heavy users of CPU/GPU which drive up heat and suck battery life. Allow 12 months from &#x201C;works on phone&#x201D; until you could expect it to &#x201C;work on glasses&#x201D;. In some cases custom silicon may be the best solution (eg Apple&#x2019;s W1 chip, or Movidius&#x2019; CVGPU or <a href=\"https://www.theverge.com/2017/7/24/16018558/microsoft-ai-coprocessor-hololens-hpu\" class=\"markup--anchor markup--li-anchor\">MSFTs forthcoming HPU v2</a>), which might mean an even longer wait.</li></ul><figure id=\"01d4\" class=\"graf graf--figure graf--layoutOutsetLeft graf-after--li\"><img class=\"progressiveMedia-noscript js-progressiveMedia-inner\" src=\"https://cdn-images-1.medium.com/max/1200/1*dzA2W4c4LUsSZhvScHrUiQ.png\"><figcaption class=\"imageCaption\">With semantic segmentation, the AR system can begin to distinguish individual objects in a scene, label them and pass that information up from ARkit to the App so it can decide appropriate interactions with&#xA0;content.</figcaption></figure><ul class=\"postList\"><div id=\"ws12\"></div><li id=\"1d59\" class=\"graf graf--li graf-after--figure\">An ability for the system to <a href=\"https://www.youtube.com/watch?v=SuezycZ9Ca0\" class=\"markup--anchor markup--li-anchor\">understand the structure &amp; semantics of the world in 3D in real-time</a> (ARKit is the very beginning of this, providing a 6dof pose relative to yourself, and a basic ground plane). People are only just becoming aware of how important this is, and it will become a widely understood problem over the next 12 months. 
There&#x2019;s no point building an AR App unless it interacts with the physical world in some way. That&#x2019;s the definition of &#x201C;AR-Native&#x201D;. If there&#x2019;s no digital+physical interaction, then a regular smartphone app will give a better UX. Tabletop 3D games on a flat table are the perfect example of &#x201C;why bother doing this in AR&#x201D;? In order to interact with the world, the system needs to capture or download a virtual 3D model of the world in front of me (see my prev post). Once this problem is understood by developers &amp; systems start to serve up the world to them (like Tango, Hololens &amp; Occipital can do today), the next problem of &#x201C;how do I layout my cool content onto a 3D scene when I don&#x2019;t know in advance what the scene will look like?&#x201D; becomes the big problem! Right now very simple heuristics can be used, but they are really crude and developers need to roll their own. Someone needs to develop a procedural content layout tool which is simple enough for developers to use (MSFT had a good shot at this with <a href=\"https://www.microsoft.com/en-us/research/publication/flare-fast-layout-for-augmented-reality-applications/\" class=\"markup--anchor markup--li-anchor\">FLARE</a> a few years ago. They really are way ahead!). Film SFX software which can procedurally generate armies of digital Orcs in a scene is today&#x2019;s state of the art. I know of one startup that has this tech in an architecture product &amp; is thinking about how to apply it to AR. I haven&#x2019;t seen any other solutions or people working on the problem. The state of the art today in 3D reconstruction is <a href=\"http://www.robots.ox.ac.uk/~victor/infinitam/\" class=\"markup--anchor markup--li-anchor\">real-time dense large-scale 3D reconstruction on a phone with a depth camera</a>. Doing this without the depth camera is about 9 months of research away from working, and another 9&#x2013;12 months away from productization. 3D Semantic segmentation state of the art is that it works in a basic way on a phone today via academic code, but again at least a couple of years to be really good enough for consumers. Large-scale tracking and 3D Apple Maps type integration is still research today. Some research implementations exist, and Google Tango VPS is state of the art today. Probably 12 months until we see basic versions of this in consumer phones.</li></ul><div id=\"ws13\"></div><figure id=\"8a98\" class=\"graf graf--figure graf--layoutOutsetLeft graf-after--li\"><img class=\"progressiveMedia-noscript js-progressiveMedia-inner\" src=\"https://cdn-images-1.medium.com/max/1200/1*qbT4D554dEbTMvNS-pqldQ.jpeg\"><figcaption class=\"imageCaption\">A poor quality image of two players who can see each other&#x2019;s buggy&#x2019;s in real-time racing around on the same table. Multi-user shared virtual AR environment in&#xA0;2013</figcaption></figure><ul class=\"postList\"><div id=\"ws14\"></div><li id=\"8fa1\" class=\"graf graf--li graf-after--figure\">An ability to share/communicate our experiences with others. At Dekko we built one of, if not the first commercially available IOS multi-player AR games (<a href=\"http://www.augmented.org/blog/2013/06/racing-ar-together/\" class=\"markup--anchor markup--li-anchor\">Tabletop speed&#x200A;</a>&#x2014;&#x200A;a digital R/C buggy which could collide/occlude with real-world objects). The lift in enjoyment playing with others who also could see your buggy was exponential. 
Right now nearly all AR demos and apps assume you are the only AR user in the world. When lots of people have AR capable devices we will want to share &#x201C;what we are seeing&#x201D; and also share an &#x201C;augmented view of ourselves&#x201D;. <a href=\"https://medium.com/@CactusWool\" class=\"markup--user markup--li-user\">Charlie Sutton</a> &amp; Cliff Warren figured this out at Samsung and it was an extremely compelling (and obvious in hindsight) product prototype. Imagine that virtual hat from a Snap Filter being something you could virtually wear all day, and everyone (or only people you filter) else wearing AR Glasses could see it on you. Basic tech to do this is understood but needs more robust outdoor localization SLAM tech &amp; an ability to share &#x201C;SLAM maps&#x201D; with each other so we are both using the same coordinate system relative to each other. Tango VPS is state of the art today, still a year or more until it&#x2019;s really solid across the industry. Consumer UX/Apps and APIs etc that take advantage of this are another 6&#x2013;12 months after that. One of the most exciting use-cases here is the concept of <a href=\"https://www.youtube.com/watch?v=7d59O6cfaM0\" class=\"markup--anchor markup--li-anchor\">Holoportation</a>.</li></ul><figure id=\"32be\" class=\"graf graf--figure graf-after--li\"><img class=\"progressiveMedia-noscript js-progressiveMedia-inner\" src=\"https://cdn-images-1.medium.com/max/1600/1*1u7qsS0jv6BNhJgUjVFUrw.jpeg\"><figcaption class=\"imageCaption\">Why wouldn&#x2019;t I want to wear this all day, not just in my selfie app? This is how I want everyone else (who arewearing AR glasses) to see&#xA0;me!</figcaption></figure><ul class=\"postList\"><div id=\"ws15\"></div><li id=\"07b0\" class=\"graf graf--li graf-after--figure\">An AR-Native HMD &#x201C;GUI&#x201D; which means an entirely new paradigm for &#x201C;applications&#x201D; as the Desktop metaphor we&#x2019;ve used for the past 40 years doesn&#x2019;t really hold anymore. This is such a huge design rabbit hole to go down&#x2026; suffice to say a 4x6 grid of square icons filling our transparent display FoV won&#x2019;t work. There&#x2019;s an opportunity for an entirely new app eco-system apart from IOS/Android to emerge, as almost none of the prior 10 years of work building smartphone apps will come across into AR. Interestingly I found that designers who have an industrial design background picked up AR design better than App designers (or god-forbid game designers, who really struggled with empathizing with real-world problems people have). I figured this is because industrial designers are trained to solve real-world problems for real people via physical 3D products. With AR you just leave the product/solution in its digital state (and maybe give it a bit more interactivity). State of the art today is in the military, with interesting work going on with self-driving car UIs, and some 3D gaming UIs. Hololens is the best readily available AR GUI to try. Try it &amp; you&#x2019;ll see how far there is still to go. Academic Research is pretty solid (<a href=\"https://www.amazon.com/3D-User-Interfaces-Practice-paperback/dp/0321980042/ref=sr_1_2?s=books&amp;ie=UTF8&amp;qid=1502296281&amp;sr=1-2&amp;refinements=p_27%3ADoug+Bowman\" class=\"markup--anchor markup--li-anchor\">Doug Bowman</a> is the man. 
Not the Doug Bowman @stop who used to work at twitter, but the 3D UI Professor from <a href=\"https://research.cs.vt.edu/3di/user/123\" class=\"markup--anchor markup--li-anchor\">Virginia Tech</a>) but research needs to find its way into products. Rumors are that Apple has a mature 3D GUI solution that works nicely in their labs today.</li></ul><div id=\"ws16\"></div><figure id=\"f2ef\" class=\"graf graf--figure graf-after--li\"><img class=\"progressiveMedia-noscript js-progressiveMedia-inner\" src=\"https://cdn-images-1.medium.com/max/1600/1*u3N5Vmpw1o45OzZr-qSndg.jpeg\"><figcaption class=\"imageCaption\">Magic Leap&#x2019;s version of an AR GUI. Some elements are in the HUD (music controls in the corner, the clock), some elements in the world (YouTube &amp; toy). Mail app could be either. How the heck do you choose to close &amp; swtich to a new &#x201C;app&#x201D;? Maybe pointing at the physical coffee cup is the way you tell the &#x201C;starbucks app&#x201D; to deliver me a coffee? Smartphone app eco-systems won&#x2019;t work&#xA0;here.</figcaption></figure><ul class=\"postList\"><div id=\"ws17\"></div><li id=\"3363\" class=\"graf graf--li graf-after--figure\">An ecosystem of apps to enable useful and entertaining use-cases. Like the smartphone, there&#x2019;s no &#x201C;killer app&#x201D; for AR. There needs to be a great UX/Gui/Input, which then lets us connect to the internet &amp; do what we already like to do, in a way that takes advantage of what only AR can deliver.</li></ul><div id=\"ws18\"></div>This is a <strong class=\"markup--strong markup--p-strong\">long</strong> list of problems still to be solved before ARKit Glasses can exist. I don&#x2019;t think Apple will take the Hololens approach &amp; try to solve everything at once in a single product. To *really* understand this list, get hold of a Hololens and look at it through the lens of the 8 items above. Hololens is <strong class=\"markup--strong markup--p-strong\">amazing</strong> because it was the first product to actually solve ALL the problems above and ship in a single integrated product for enterprise (not military) pricing. That had never been done before! It&#x2019;s the <a href=\"https://en.wikipedia.org/wiki/Nokia_9210_Communicator\" class=\"markup--anchor markup--p-anchor\">Nokia Communicator</a> of AR! Not very useful, but it proves it can be done. Yes, all the solutions to those 8 points are very simple and way short of where they need to be, but they did what no one had ever done before. It also means Microsoft probably understands these problems better than anyone else&#x2026;</p><div id=\"ws19\"></div>It makes sense to me that Apple will take a two-track product strategy to eventually get to an ARKit Glasses product. Apple is famously design led, and historically their products solve only one design problem at a time, as user behavior is hard to change. A great example of this is the original iPhone, which solved smartphone input via Multi-touch. This changed everything. All the other capabilities of the original iPhone already existed in other phones on the market.</p><div id=\"ws20\"></div>One set of problems are technology problems. Computer Vision, sensors, SDKs, 3D developer tools etc. The other set, which are just as difficult to solve (and under-appreciated in the AR community), are Design problems. 
This includes what use-case(s) do the products address, how do I interact with the UX, how does this product express my identity (I am going to wear it on my face!), how is this product desirable?</p>I see Technology problems being solved on the iPhone platform via an increasingly more capable ARKit SDK.</p>I see Design problems being solved by evolving Apple&#x2019;s wearable products (watch, airpods, non-ARKit glasses&#x2026; and HomePod to a degree) into a &#x201C;constellation&#x201D; of products that all work seamlessly together.</p>Then they&#x2019;ll merge both tracks to create ARKit Glasses.</p><div id=\"ws21\"></div>Some really interesting predictions are what will be the intermediary products &amp; market opportunities between now &amp; then. Products from Apple and from others. Blackberries, Nokias, Palm Pilots were all hugely successful products in the &#x201C;transition years&#x201D; between being able to connect a phone to the internet and iPhone/Android dominance.</p><div id=\"ws22\"></div>ARKit is a big deal for the AR industry. Not just because it&#x2019;s got Apple&#x2019;s marketing fairy dust, but because it means that developers now have a high quality &#x201C;6dof pose&#x201D; to use in AR Apps. <strong class=\"markup--strong markup--p-strong\"><em class=\"markup--em markup--p-em\">EVERYTHING</em></strong> in AR is built on top of a high quality pose. Solid registration of digital assets in the world. Indoor &amp; outdoor navigation. 3D reconstruction algorithms. Large scale tracking systems. Collision &amp; Occlusion of digital &amp; real. Gesture detection. All these only work as good as the pose the tracking system provides. My <a href=\"https://medium.com/super-ventures-blog/why-is-arkit-better-than-the-alternatives-af8871889d6a\" class=\"markup--anchor markup--p-anchor\">earlier article</a> explains more about how Apple did this &amp; how it all works.</p><div id=\"ws23\"></div>Apple launched ARKit on the iPhone hardware platform, being well aware that a hand-held form-factor is severely compromised for AR use-cases (too long a topic to cover here, I&#x2019;ll write more about this). Apple knows this because many of the leaders of their AR teams knew this before they joined Apple! The reason to launch ARKit on iPhone is that consumer expectations will be much lower, as ARKit v1 is really quite a limited SLAM system (though excellent at what it does do). Developers have the opportunity to learn how to build AR Apps (very different than smartphone or VR apps!). Consumers see lots of great demos on YouTube &amp; start to be educated about the potential of AR. Apple gets to bring better algorithms to market on the device that has the most CPU/GPU/Sensor power without trying to deal with wearable hardware constraints.</p><div id=\"ws24\"></div><figure id=\"5e51\" class=\"graf graf--figure graf-after--p\"><img class=\"progressiveMedia-noscript js-progressiveMedia-inner\" src=\"https://cdn-images-1.medium.com/max/1600/1*esqTUr562SXNwh4DQGDDfw.jpeg\"><figcaption class=\"imageCaption\">Lots of processing power. Compromized form-factor.</figcaption></figure><p id=\"ef01\" class=\"graf graf--p graf-after--figure\"><div id=\"ws25\"></div>When will ARKit tech be good enough for ARGlasses? My view is that the system will need to handle large scale / outdoor tracking &amp; localization, as well as dense 3D reconstruction in real-time via a monocular RGB camera. This is the minimum for ARKit to enable &#x201C;AR as it&#x2019;s popularly imagined to be&#x201D;. 
Note also that tech problems to enable natural input also need to be solved.</p><div id=\"ws26\"></div>This is the fun stuff. At heart I&#x2019;m a technologist who understands business. But I&#x2019;m married to an AR Designer (@silkamiesnieks) and both my AR product development teams were co-led by Designers (Silka at Dekko, and @cactuswool at Samsung) who taught me a <strong class=\"markup--strong markup--p-strong\">lot</strong>. As an aside&#x2026; I still haven&#x2019;t seen other AR product teams do this, which I think is a missed opportunity (I bet Apple &amp; Snap have designers involved at the top of the AR product org). I believe (from hands on experience!) that building AR Glasses is more of a &#x201C;Design Problem&#x201D; than a technical problem, even though the tech problems are about as hard as they get in tech. Prioritizing only the tech leads to a &#x201C;boil the ocean&#x201D; scenario, which one or two AR OEMs seem to be struggling with. The super-hard tech problems will be solved before the super-harder-er design problems are solved, but it&#x2019;s the design decisions that determine which tech problems to solve.</p>I had the pleasure of seeing computer vision experts sitting across the table from senior product designers, learning each others language, and observing how it changed how both teams approach AR problems.</p><div id=\"ws27\"></div>I&#x2019;ve been very impressed with Apple&#x2019;s softly-softly approach to AR hardware. Airpods are IMO the first successful AR hardware product. I usually get blank looks when I state this, but they solve 2 critical problems for AR Glasses. They enable voice input/output for a natural mode of interaction (plus basic touch/tap). They also let us &#x201C;augment&#x201D; our surroundings with sound, a simple example would be the audio guides at every museum. This is audio AR. I&#x2019;ve taken to wearing mine continously while out &amp; about. In restaurants &amp; bars etc. No one cares, no one has punched me, and staff/friends speak to me assuming I can hear them (which I can as I pause music in advance). It surprised me how quickly they have been accepted as an all-day product. I predict we will see more &#x201C;AI enabled headphones&#x201D; (hearables) from other AR platform OEMs in the next 12 months. Startup acquisitions happening in 3, 2, 1&#x2026;</p><div id=\"ws28\"></div><figure id=\"1627\" class=\"graf graf--figure graf-after--p\"><img class=\"progressiveMedia-noscript js-progressiveMedia-inner\" src=\"https://cdn-images-1.medium.com/max/1600/1*0AboI7iwIGy_Wq0rV8DGug.jpeg\"><figcaption class=\"imageCaption\">Augmented Reality that is cool to wear. Augmented Audio, who cares about graphic displays?</figcaption></figure><p id=\"d5f5\" class=\"graf graf--p graf-after--figure\"><div id=\"ws29\"></div>The other significant aspect of Airpods is that they are physically separate from the Glasses. This reduces the size and cost of the eventual Glasses product. It should be obvious that we all wear multiple pairs of regular glasses (sunglasses, reading glasses, sports glasses etc) so why wouldn&#x2019;t we want the same with our AR Glasses? 
The industry has (probably correctly) assumed we won&#x2019;t buy a $1500 face-computer multiple times for fashion, but if the glasses are little more than a plastic frame, an injection moulded waveguide and a beefed up W1 chip (which can handle video), plus maybe smartphone camera &amp; IMU, the BOM unit cost could be in the low tens of dollars&#x2026;</p>&#x2026;of course, as Tim Cook said earlier this week that <a href=\"https://www.cnbc.com/2017/08/01/tim-cook-augmented-reality-will-make-iphone-even-more-essential.html\" class=\"markup--anchor markup--p-anchor\">the iphone will be &#x201C;even more essential&#x201D; for augmented reality</a></p><div id=\"ws30\"></div>&#x2026;also OLED micro-displays are today&#x2019;s best way to drive a wearable display, so iPhone 8 shifting to OLED is also useful. Soon OLED microdisplays will be superseded for HMDs by fancier display technology with multiple depths of focus, maybe retinal projection &amp; other magic, but today its OLED.</p><div id=\"ws31\"></div>I don&#x2019;t think Apple will ship a camera in their first Glasses. They will be a HUD. More bluntly&#x2026; an Apple Watch on your face. Solve one design problem at a time (new display form factor), and re-use an existing use-case. I&#x2019;ve spoken to people who have held &amp; used Apple&#x2019;s prototypes, and they didn&#x2019;t have a camera. Allowing for the 3D printed frame, they looked good, a lot like RayBan Wayfarers. The recent Reddit thread was accurate. The use-cases being tested were in regard to &#x201C;notifications&#x201D;. This would be a natural fit for a Siri service similar to Google&#x2019;s Assistant (and also synergistic with Airpods). I heard about 9 months ago that there were no glasses on Apple&#x2019;s 12 month marketing roadmap. I expect we&#x2019;ll see some Glasses ship next year (not AR capable, but with a Heads Up Display for notifications), with this year taken up with marketing ARKit technology on iPhone. There&#x2019;s a decent chance a camera could ship in Apple Glasses v1 (no technical reason not too) but that would be unlike Apple when a simpler product could ship which only solves the single design problem of getting people to wear a wearable display.</p><figure id=\"b83f\" class=\"graf graf--figure graf--layoutOutsetLeft graf-after--p\"><img class=\"progressiveMedia-noscript js-progressiveMedia-inner\" src=\"https://cdn-images-1.medium.com/max/1200/1*Fapv8ErZY6NtUTDo9sKMtA.jpeg\"><figcaption class=\"imageCaption\">Marc Newson&#x2019;s recent high-end fashionable glasses&#xA0;range</figcaption></figure><p id=\"1bcd\" class=\"graf graf--p graf-after--figure\"><div id=\"ws32\"></div>The most important advantage re the Glasses is that Apple has Marc Newson on the team. <a href=\"https://www.google.com/search?q=marc+newson&amp;client=safari&amp;rls=en&amp;source=lnms&amp;tbm=isch&amp;sa=X&amp;ved=0ahUKEwjk9P7rrr7VAhWKjlQKHSjiB-sQ_AUICygC&amp;biw=1324&amp;bih=1276\" class=\"markup--anchor markup--p-anchor\">Look him up</a> if you don&#x2019;t really know who he is. He knows how to design cool glasses. 
One lesson I learnt at Samsung, is that Industrial Designers view glasses as one of the most difficult physical products to design, mostly because everyone&#x2019;s face shape &amp; taste is different, and there&#x2019;s physically almost nothing to them (ie can&#x2019;t add features to keep everyone happy, the design has to be very pure &amp; simple).</p><figure id=\"36b4\" class=\"graf graf--figure graf--layoutOutsetLeft graf-after--p\"><img class=\"progressiveMedia-noscript js-progressiveMedia-inner\" src=\"https://cdn-images-1.medium.com/max/1200/1*qu-oJhQ-jYZoV07HrlPbfg.jpeg\"><figcaption class=\"imageCaption\">But what about the tech specs??!!!</figcaption></figure><p id=\"1692\" class=\"graf graf--p graf-after--figure\"><div id=\"ws33\"></div>Apple has also been learning how to sell Fashion. The Apple Watch has been invaluable in this regard. I still don&#x2019;t know any tech specs about the watch, but I know Beyonce wore a launch edition in Vogue. Apple&#x2019;s also learnt how to sell a range of colors and bands (and price points) so that people can express their individuality. No other tech company working on AR (except maybe Snap, who is learning fast) understands this.</p><div id=\"ws34\"></div>Anyone who thinks the mass market won&#x2019;t view AR Glasses as first &amp; foremost a fashion purchase is probably still wearing Google Glass. They need to be designed <strong class=\"markup--strong markup--p-strong\"><em class=\"markup--em markup--p-em\">&amp; marketed &amp; priced </em></strong>with that in mind.</p><figure id=\"bc6d\" class=\"graf graf--figure graf--layoutOutsetLeft graf-after--h4\"><img class=\"progressiveMedia-noscript js-progressiveMedia-inner\" src=\"https://cdn-images-1.medium.com/max/1200/1*RDqs2Fa0XQV2AALScFdmdw.jpeg\"><figcaption class=\"imageCaption\">This is pretty much what I think Apple&#x2019;s AR &#x201C;constellation&#x201D; will look like to&#xA0;wear&#x2026;</figcaption></figure><p id=\"1cd0\" class=\"graf graf--p graf-after--figure\"><div id=\"ws35\"></div>I don&#x2019;t find it hard to imagine in a few years that people will be wearing their (cool) Apple Watch along with some (cool) Apple Glasses and Airpods all day. They may have a 2nd pair of glasses in their bag. And an iPhone in their pocket. The Watch can serve as a secondary way to give input (tap the watch to select in the glasses). It&#x2019;s easy to take your Glasses off when you walk into a bar, and still be connected via Airpods &amp; Watch. You can still pull out your iPhone if you need a display to share (or a keyboard), or want some more privacy.</p>In terms of when?&#xA0;&#x2026;</p>late 2018</p><div id=\"ws36\"></div>I could easily imagine non-ARKit Apple Glasses (display, some basic input, mostly controlled from the phone or AirPods, no camera) in 2018. Very simple (&#x201C;Trivial! Wasted opportunity!!&#x201D; the AR industry will cry). Emphasis on fashion, lots of frame styles &amp; price points. Marketed in fashion magazines. ARKit on iPhone expanded to support 3D reconstruction, plus some features to improve realistic content rendering (light source detection?) &amp; multi-user AR experiences.</p>2019</p><div id=\"ws37\"></div>non-ARKit Glasses now with with Camera (for Photos/Video, more like Snap spectacles). Look for a &#x201C;W2&#x201D; wireless chip that supports video. ARKit on iPhone expands to support large scale tracking &amp; large-scale mono RGB 3D reconstruction. 
Integrates deeply with Apple Maps &amp; Siri.</p><div id=\"ws38\"></div>Probably another company ships their version of &#x201C;full-stack&#x201D; consumer AR glasses, which work OK. Enterprise AR really starts to gain traction. A Few Mobile ARKit apps show strong metrics.</p>2020</p><div id=\"ws39\"></div>Version 3 of the Glasses with lots of issues fixed, better power, displays, wireless, input, GUI etc. ARKit gets lots of tweaks &amp; enhancements mostly to handle larger areas and more simultaneous users and content creation tools are mature enough that most developers can easily use them. People complain that Apple is missing the AR Glasses market as decent competitor products start to ship.</p>2021</p><div id=\"ws40\"></div>The merged ARKit Glasses product range ships to huge success! We arrive at the <a href=\"https://www.amazon.com/dp/B004M8SR2O/ref=dp-kindle-redirect?_encoding=UTF8&amp;btkr=1\" class=\"markup--anchor markup--p-anchor\">Rainbows End</a>.</p><em class=\"markup--em markup--p-em\">What about ARkit on iPhone8, it&#x2019;s awesome and will be on 400 million phones this year and a gazillion billion next year?</em></p><div id=\"ws41\"></div>Yes and Yes. However&#x2026; Handheld AR on Mobile has a number of inherent UX challenges to overcome (its handheld for a start), and ARKit itself is limited today in the UX it can enable (eg no collision/occlusion, no absolute coordinates). I think there are some great use-cases that are possible with Mobile AR, but they aren&#x2019;t the type of thing to capture 10&apos;s of millions of daily users next year. You could still build a great startup on iPhone ARkit that exits for $100m+ in the next few years, but unlikely you&#x2019;ll be the next Google. Topic for another post.</p><div id=\"ws42\"></div><em class=\"markup--em markup--p-em\">AR will always be a niche product. The market potential is similar to Games Consoles (10&#x2019;s of millions of devices). How can AR Glasses possibly be a smartphone sized market (billions of devices)?</em></p><div id=\"ws43\"></div>Looking at AR as fundmentally an entertainment product (like VR is mostly viewed as) does restrict the market size. However I have believed for many years that the real potential of AR is as a communication and information device. These are the things we use our smartphones for today, and if AR products can deliver compelling user experiences that let us communicate better &amp; understand the world better, then it&#x2019;s a smartphone sized market.</p><div id=\"ws44\"></div>Alternatively if you look at VR as something that lets me &#x201C;escape&#x201D; to another place, where AR enhances where I already am, then just by looking at how a typical person spends the hours in their day tips the balance towards AR. We maybe spend a coupe of hours a day &#x201C;escaping&#x201D; into a book or TV etc, but most of our hours are spent engaging with the world.</p><div id=\"ws45\"></div>Whether we prefer to communicate with each other in VR (FB spaces or RecRoom type of thing) or AR (<a href=\"https://www.microsoft.com/en-us/research/project/holoportation-3/\" class=\"markup--anchor markup--p-anchor\">Holoportation</a>) is hard to predict. My bet is on AR, though its harder to build the tech.</p><em class=\"markup--em markup--p-em\">What about VR, that is where all the action is?</em></p><div id=\"ws46\"></div>Prior to ARKit I would have agreed &amp; said &#x201C;just wait for AR&#x201D;. 
Now I don&#x2019;t have to say that&#x2026;&#xA0;:-)</p><em class=\"markup--em markup--p-em\">Why won&#x2019;t other companies solve these problems before Apple?</em></p><div id=\"ws47\"></div>Microsoft is years ahead of everyone else, and has all the software &amp; hardware assets to win this race (An operating system + developers, Bing Maps, xbox 3d graphics, oem hw partners, msft research). Google has great AI &amp; device ecosystem advantages. I think other companies will ship products with all the features needed to be called consumer AR glasses, but I don&#x2019;t think they&#x2019;ll be able to sell them, as it will be a fashion led buying decision by the consumer.</p><em class=\"markup--em markup--p-em\">I&#x2019;m wrong because&#x2026;..</em></p><div id=\"ws48\"></div>There are so many ways I could be wrong with these predictions. Most likely is that Apple has come up with a creative design solution to deliver a user experience without needing the complete tech problems fully solved.</p><div id=\"ws49\"></div>There&#x2019;s also a pretty good chance that one of the &#x201C;outsiders&#x201D; in mobile wins a dominant market position (eg like Google usurped MSFT &amp; Intel &amp; the P.C. OEMs). This could be Snap or a startup in a garage, or even a MSFT comeback. This sort of potential industry shakeup hasn&#x2019;t been viable for over a decade. These are exciting times. Facebook is interesting&#x2026; topic of another post. The scale of the smartphone supply chain eco-system &amp; the fact that better hardware integration enables a more fashionable design gives an advantage to players who can leverage this. The transition to a fashion led purchase with completely new App paradigms creates an opportunity for disruptive new entrants.</p>","author":"Matt Miesnieks","date_published":"2017-08-10T01:55:41.199Z","lead_image_url":"https://cdn-images-1.medium.com/max/1200/1*RDqs2Fa0XQV2AALScFdmdw.jpeg","dek":null,"next_page_url":null,"url":"https://medium.com/super-ventures-blog/why-apples-glasses-won-t-include-arkit-46a1d40381fe","domain":"medium.com","excerpt":"It could be 2021 before they do","word_count":5341,"direction":"ltr","total_pages":1,"rendered_pages":1,"urlDisplay":"medium.com › super-ventures-blog","blurb":"About the only thing more exciting than ARKit is the thought of Apple Glasses that run ARKit. There are rumors and rumors-of-rumors all claiming “when not if” they are ...","sourceURI":false,"contentSummary":["About the only thing more exciting than ARKit is the thought of Apple Glasses that run ARKit. There are rumors and rumors-of-rumors all claiming “when not if” they are","I’ll list what tech problems still need to be solved, give some indication of where the state of the art is today for each element, and make an educated guess as to when it will be “consumer ready”.","However I do have a great deal of knowledge of the underlying enabling technologies & design problems, a large number of friends who have seen small unreleased pieces of Apple’s current AR work, and enough time in the tech industry to see how history rhymes wrt platform transitions.","ARKit Glasses for the sake of this post means fashionable glasses that we want to wear in public, that can render digital content as if it’s part of the real world, and are internet connected with an App ecosystem.","The difference is simple to detect: if all the content is “stuck to the display” i.e. 
you turn your head and all the content stays on the display, it’s a HUD.","This is the UX of AR Glasses.","Here’s 8 big problems that still need to be solved for Apple (well, anyone) to ship Consumer AR Glasses.","It’s hard to think of anything that is a more personal wearable object than what is worn around our eyes.","The image is projected into the waveguide which spreads it over the plastic lens in the glasses so you can see it in front of your face.","This is the reason the military still uses OLEDs & waveguides today (not for a lack of funding & research into better systems!).","Some of Mark Billinghurst research from 2014.","This is big and not close to being solved.","I know of one startup that has this tech in an architecture product & is thinking about how to apply it to AR.","A poor quality image of two players who can see each other’s buggy’s in real-time racing around on the same table.","Tango VPS is state of the art today, still a year or more until it’s really solid across the industry.","Hololens is the best readily available AR GUI to try.","Maybe pointing at the physical coffee cup is the way you tell the “starbucks app” to deliver me a coffee?","There needs to be a great UX/Gui/Input, which then lets us connect to the internet & do what we already like to do, in a way that takes advantage of what only AR can deliver.","This is a long list of problems still to be solved before ARKit Glasses can exist.","It makes sense to me that Apple will take a two-track product strategy to eventually get to an ARKit Glasses product.","This includes what use-case(s) do the products address, how do I interact with the UX, how does this product express my identity (I am going to wear it on my face!), how is this product desirable?","Some really interesting predictions are what will be the intermediary products & market opportunities between now & then.","ARKit is a big deal for the AR industry.","Developers have the opportunity to learn how to build AR Apps (very different than smartphone or VR apps!).","Lots of processing power.","This is the minimum for ARKit to enable “AR as it’s popularly imagined to be”.","Prioritizing only the tech leads to a “boil the ocean” scenario, which one or two AR OEMs seem to be struggling with.","Airpods are IMO the first successful AR hardware product.","Augmented Reality that is cool to wear.","This reduces the size and cost of the eventual Glasses product.","Soon OLED microdisplays will be superseded for HMDs by fancier display technology with multiple depths of focus, maybe retinal projection & other magic, but today its OLED.","I’ve spoken to people who have held & used Apple’s prototypes, and they didn’t have a camera.","The most important advantage re the Glasses is that Apple has Marc Newson on the team.","I still don’t know any tech specs about the watch, but I know Beyonce wore a launch edition in Vogue.","They need to be designed & marketed & priced with that in mind.","I don’t find it hard to imagine in a few years that people will be wearing their (cool) Apple Watch along with some (cool) Apple Glasses and Airpods all day.","I could easily imagine non-ARKit Apple Glasses (display, some basic input, mostly controlled from the phone or AirPods, no camera) in 2018.","ARKit on iPhone expands to support large scale tracking & large-scale mono RGB 3D reconstruction.","Enterprise AR really starts to gain traction.","People complain that Apple is missing the AR Glasses market as decent competitor products start to ship.","The merged ARKit Glasses 
product range ships to huge success!","You could still build a great startup on iPhone ARkit that exits for $100m+ in the next few years, but unlikely you’ll be the next Google.","The market potential is similar to Games Consoles (10’s of millions of devices).","These are the things we use our smartphones for today, and if AR products can deliver compelling user experiences that let us communicate better & understand the world better, then it’s a smartphone sized market.","Alternatively if you look at VR as something that lets me “escape” to another place, where AR enhances where I already am, then just by looking at how a typical person spends the hours in their day tips the balance towards AR.","My bet is on AR, though its harder to build the tech.","Now I don’t have to say that… :-)","I think other companies will ship products with all the features needed to be called consumer AR glasses, but I don’t think they’ll be able to sell them, as it will be a fashion led buying decision by the consumer.","Most likely is that Apple has come up with a creative design solution to deliver a user experience without needing the complete tech problems fully solved.","The scale of the smartphone supply chain eco-system & the fact that better hardware integration enables a more fashionable design gives an advantage to players who can leverage this."],"topSentences":[{"before":"","sentence":"About the only thing more exciting than ARKit is the thought of Apple Glasses that run ARKit. There are rumors and rumors-of-rumors all claiming “when not if” they are","after":" coming. I’m going to attempt to answer the “when” by looking in depth at the “what” technology is needed. I know glasses are being worked on at Apple, and the prototypes are state of the art. I also know what it takes to build a full-stack wearable AR HMD, having built fully functional prototypes from scratch. There are a bunch of elements that need to work before a consumer product can exist. These elements don’t all exist today (even at the state of the art).","paragraph":"About the only thing more exciting than ARKit is the thought of Apple Glasses that run ARKit. There are rumors and rumors-of-rumors all claiming “when not if” they are coming. I’m going to attempt to answer the “when” by looking in depth at the “what” technology is needed. I know glasses are being worked on at Apple, and the prototypes are state of the art. I also know what it takes to build a full-stack wearable AR HMD, having built fully functional prototypes from scratch. There are a bunch of elements that need to work before a consumer product can exist. These elements don’t all exist today (even at the state of the art)."},{"before":"As an aside, when it comes to glasses/eyewear/HMDs the term “AR” is muddied. There’s a ton of confusion about what is a wearable Heads-Up-Display (HUD) and what are AR Glasses. They both put a transparent display on your face (maybe you look right through it or maybe it’s off to the side. Maybe it covers one eye or both), but the UX (and overall product) is completely different. Marketing departments have been calling HUDs AR Glasses, and calling AR Glasses things like Mixed Reality or Holographic displays. ","sentence":"The difference is simple to detect: if all the content is “stuck to the display” i.e. 
you turn your head and all the content stays on the display, it’s a HUD.","after":" Google Glass is a good example, also Epson Moverio (in fact most head mounted displays are HUDs, though you can often add an AR SDK like Vuforia and turn them into simple AR Glasses. This is what ODG do for example). A HUD alone isn’t AR. It’s just a regular display you wear on your head. If the content can be “stuck to the real world” like ARKit enables with a phone, or Hololens, or Meta, then you have a pair of AR Glasses. AR Glasses are a superset. They are a HUD with a see-through lens plus 6dof tracking plus a lot more…","paragraph":"As an aside, when it comes to glasses/eyewear/HMDs the term “AR” is muddied. There’s a ton of confusion about what is a wearable Heads-Up-Display (HUD) and what are AR Glasses. They both put a transparent display on your face (maybe you look right through it or maybe it’s off to the side. Maybe it covers one eye or both), but the UX (and overall product) is completely different. Marketing departments have been calling HUDs AR Glasses, and calling AR Glasses things like Mixed Reality or Holographic displays. The difference is simple to detect: if all the content is “stuck to the display” i.e. you turn your head and all the content stays on the display, it’s a HUD. Google Glass is a good example, also Epson Moverio (in fact most head mounted displays are HUDs, though you can often add an AR SDK like Vuforia and turn them into simple AR Glasses. This is what ODG do for example). A HUD alone isn’t AR. It’s just a regular display you wear on your head. If the content can be “stuck to the real world” like ARKit enables with a phone, or Hololens, or Meta, then you have a pair of AR Glasses. AR Glasses are a superset. They are a HUD with a see-through lens plus 6dof tracking plus a lot more…"},{"before":"A high-res OLED microdisplay from e-magin, suitable for a HMD (or a Watch). ","sentence":"The image is projected into the waveguide which spreads it over the plastic lens in the glasses so you can see it in front of your face.","after":"","paragraph":"A high-res OLED microdisplay from e-magin, suitable for a HMD (or a Watch). The image is projected into the waveguide which spreads it over the plastic lens in the glasses so you can see it in front of your face."},{"before":"","sentence":"With semantic segmentation, the AR system can begin to distinguish individual objects in a scene, label them and pass that information up from ARkit to the App so it can decide appropriate interactions with content.","after":"","paragraph":"With semantic segmentation, the AR system can begin to distinguish individual objects in a scene, label them and pass that information up from ARkit to the App so it can decide appropriate interactions with content."},{"before":"","sentence":"An ability for the system to understand the structure & semantics of the world in 3D in real-time (ARKit is the very beginning of this, providing a 6dof pose relative to yourself, and a basic ground plane).","after":" People are only just becoming aware of how important this is, and it will become a widely understood problem over the next 12 months. There’s no point building an AR App unless it interacts with the physical world in some way. That’s the definition of “AR-Native”. If there’s no digital+physical interaction, then a regular smartphone app will give a better UX. Tabletop 3D games on a flat table are the perfect example of “why bother doing this in AR”? 
In order to interact with the world, the system needs to capture or download a virtual 3D model of the world in front of me (see my prev post). Once this problem is understood by developers & systems start to serve up the world to them (like Tango, Hololens & Occipital can do today), the next problem of “how do I layout my cool content onto a 3D scene when I don’t know in advance what the scene will look like?” becomes the big problem! Right now very simple heuristics can be used, but they are really crude and developers need to roll their own. Someone needs to develop a procedural content layout tool which is simple enough for developers to use (MSFT had a good shot at this with FLARE a few years ago. They really are way ahead!). Film SFX software which can procedurally generate armies of digital Orcs in a scene is today’s state of the art. I know of one startup that has this tech in an architecture product & is thinking about how to apply it to AR. I haven’t seen any other solutions or people working on the problem. The state of the art today in 3D reconstruction is real-time dense large-scale 3D reconstruction on a phone with a depth camera. Doing this without the depth camera is about 9 months of research away from working, and another 9–12 months away from productization. 3D Semantic segmentation state of the art is that it works in a basic way on a phone today via academic code, but again at least a couple of years to be really good enough for consumers. Large-scale tracking and 3D Apple Maps type integration is still research today. Some research implementations exist, and Google Tango VPS is state of the art today. Probably 12 months until we see basic versions of this in consumer phones.","paragraph":"An ability for the system to understand the structure & semantics of the world in 3D in real-time (ARKit is the very beginning of this, providing a 6dof pose relative to yourself, and a basic ground plane). People are only just becoming aware of how important this is, and it will become a widely understood problem over the next 12 months. There’s no point building an AR App unless it interacts with the physical world in some way. That’s the definition of “AR-Native”. If there’s no digital+physical interaction, then a regular smartphone app will give a better UX. Tabletop 3D games on a flat table are the perfect example of “why bother doing this in AR”? In order to interact with the world, the system needs to capture or download a virtual 3D model of the world in front of me (see my prev post). Once this problem is understood by developers & systems start to serve up the world to them (like Tango, Hololens & Occipital can do today), the next problem of “how do I layout my cool content onto a 3D scene when I don’t know in advance what the scene will look like?” becomes the big problem! Right now very simple heuristics can be used, but they are really crude and developers need to roll their own. Someone needs to develop a procedural content layout tool which is simple enough for developers to use (MSFT had a good shot at this with FLARE a few years ago. They really are way ahead!). Film SFX software which can procedurally generate armies of digital Orcs in a scene is today’s state of the art. I know of one startup that has this tech in an architecture product & is thinking about how to apply it to AR. I haven’t seen any other solutions or people working on the problem. 
The state of the art today in 3D reconstruction is real-time dense large-scale 3D reconstruction on a phone with a depth camera. Doing this without the depth camera is about 9 months of research away from working, and another 9–12 months away from productization. 3D Semantic segmentation state of the art is that it works in a basic way on a phone today via academic code, but again at least a couple of years to be really good enough for consumers. Large-scale tracking and 3D Apple Maps type integration is still research today. Some research implementations exist, and Google Tango VPS is state of the art today. Probably 12 months until we see basic versions of this in consumer phones."},{"before":"An AR-Native HMD “GUI” which means an entirely new paradigm for “applications” as the Desktop metaphor we’ve used for the past 40 years doesn’t really hold anymore. ","sentence":"This is such a huge design rabbit hole to go down… suffice to say a 4x6 grid of square icons filling our transparent display FoV won’t work.","after":" There’s an opportunity for an entirely new app eco-system apart from IOS/Android to emerge, as almost none of the prior 10 years of work building smartphone apps will come across into AR. Interestingly I found that designers who have an industrial design background picked up AR design better than App designers (or god-forbid game designers, who really struggled with empathizing with real-world problems people have). I figured this is because industrial designers are trained to solve real-world problems for real people via physical 3D products. With AR you just leave the product/solution in its digital state (and maybe give it a bit more interactivity). State of the art today is in the military, with interesting work going on with self-driving car UIs, and some 3D gaming UIs. Hololens is the best readily available AR GUI to try. Try it & you’ll see how far there is still to go. Academic Research is pretty solid (Doug Bowman is the man. Not the Doug Bowman @stop who used to work at twitter, but the 3D UI Professor from Virginia Tech) but research needs to find its way into products. Rumors are that Apple has a mature 3D GUI solution that works nicely in their labs today.","paragraph":"An AR-Native HMD “GUI” which means an entirely new paradigm for “applications” as the Desktop metaphor we’ve used for the past 40 years doesn’t really hold anymore. This is such a huge design rabbit hole to go down… suffice to say a 4x6 grid of square icons filling our transparent display FoV won’t work. There’s an opportunity for an entirely new app eco-system apart from IOS/Android to emerge, as almost none of the prior 10 years of work building smartphone apps will come across into AR. Interestingly I found that designers who have an industrial design background picked up AR design better than App designers (or god-forbid game designers, who really struggled with empathizing with real-world problems people have). I figured this is because industrial designers are trained to solve real-world problems for real people via physical 3D products. With AR you just leave the product/solution in its digital state (and maybe give it a bit more interactivity). State of the art today is in the military, with interesting work going on with self-driving car UIs, and some 3D gaming UIs. Hololens is the best readily available AR GUI to try. Try it & you’ll see how far there is still to go. Academic Research is pretty solid (Doug Bowman is the man. 
Not the Doug Bowman @stop who used to work at twitter, but the 3D UI Professor from Virginia Tech) but research needs to find its way into products. Rumors are that Apple has a mature 3D GUI solution that works nicely in their labs today."},{"before":"","sentence":"This is a long list of problems still to be solved before ARKit Glasses can exist.","after":" I don’t think Apple will take the Hololens approach & try to solve everything at once in a single product. To *really* understand this list, get hold of a Hololens and look at it through the lens of the 8 items above. Hololens is amazing because it was the first product to actually solve ALL the problems above and ship in a single integrated product for enterprise (not military) pricing. That had never been done before! It’s the Nokia Communicator of AR! Not very useful, but it proves it can be done. Yes, all the solutions to those 8 points are very simple and way short of where they need to be, but they did what no one had ever done before. It also means Microsoft probably understands these problems better than anyone else…","paragraph":"This is a long list of problems still to be solved before ARKit Glasses can exist. I don’t think Apple will take the Hololens approach & try to solve everything at once in a single product. To *really* understand this list, get hold of a Hololens and look at it through the lens of the 8 items above. Hololens is amazing because it was the first product to actually solve ALL the problems above and ship in a single integrated product for enterprise (not military) pricing. That had never been done before! It’s the Nokia Communicator of AR! Not very useful, but it proves it can be done. Yes, all the solutions to those 8 points are very simple and way short of where they need to be, but they did what no one had ever done before. It also means Microsoft probably understands these problems better than anyone else…"},{"before":"Apple launched ARKit on the iPhone hardware platform, being well aware that a hand-held form-factor is severely compromised for AR use-cases (too long a topic to cover here, I’ll write more about this). Apple knows this because many of the leaders of their AR teams knew this before they joined Apple! The reason to launch ARKit on iPhone is that consumer expectations will be much lower, as ARKit v1 is really quite a limited SLAM system (though excellent at what it does do). ","sentence":"Developers have the opportunity to learn how to build AR Apps (very different than smartphone or VR apps!).","after":" Consumers see lots of great demos on YouTube & start to be educated about the potential of AR. Apple gets to bring better algorithms to market on the device that has the most CPU/GPU/Sensor power without trying to deal with wearable hardware constraints.","paragraph":"Apple launched ARKit on the iPhone hardware platform, being well aware that a hand-held form-factor is severely compromised for AR use-cases (too long a topic to cover here, I’ll write more about this). Apple knows this because many of the leaders of their AR teams knew this before they joined Apple! The reason to launch ARKit on iPhone is that consumer expectations will be much lower, as ARKit v1 is really quite a limited SLAM system (though excellent at what it does do). Developers have the opportunity to learn how to build AR Apps (very different than smartphone or VR apps!). Consumers see lots of great demos on YouTube & start to be educated about the potential of AR. 
Apple gets to bring better algorithms to market on the device that has the most CPU/GPU/Sensor power without trying to deal with wearable hardware constraints."}],"topParagraphs":"About the only thing more exciting than ARKit is the thought of Apple Glasses that run ARKit. There are rumors and rumors-of-rumors all claiming “when not if” they are coming. I’m going to attempt to answer the “when” by looking in depth at the “what” technology is needed. I know glasses are being worked on at Apple, and the prototypes are state of the art. I also know what it takes to build a full-stack wearable AR HMD, having built fully functional prototypes from scratch. There are a bunch of elements that need to work before a consumer product can exist. These elements don’t all exist today (even at the state of the art).The difference is simple to detect: if all the content is “stuck to the display” i.e. you turn your head and all the content stays on the display, it’s a HUD.The image is projected into the waveguide which spreads it over the plastic lens in the glasses so you can see it in front of your face.With semantic segmentation, the AR system can begin to distinguish individual objects in a scene, label them and pass that information up from ARkit to the App so it can decide appropriate interactions with content.An ability for the system to understand the structure & semantics of the world in 3D in real-time (ARKit is the very beginning of this, providing a 6dof pose relative to yourself, and a basic ground plane).This is such a huge design rabbit hole to go down… suffice to say a 4x6 grid of square icons filling our transparent display FoV won’t work.This is a long list of problems still to be solved before ARKit Glasses can exist.Developers have the opportunity to learn how to build AR Apps (very different than smartphone or VR apps!).","summaryWordsCount":1586,"readOrigBtnText":"Browser","wsUserId":"RDHaMejicjMl4qdiiv0BGwOzZ7l2","wsQuery":"GOOGLE_SEARCH: apple glasses ARKit","wsUpdatedAt":1503099954084,"wsJsonVer":1}'

      // write the file, then upload it; the calls are chained so the upload
      // only starts after the write completes and the temp file is only
      // removed once the upload has finished
      RNFS.writeFile(path, result, 'utf8')
        .then(() => {
          console.log('FILE WRITTEN at', path)
          return firebase.storage()
            .ref('/resultJsonByContentHash/' + contentHash)
            .putFile(path)
        })
        .then(uploadedFile => {
          console.log('Uploaded to firebase:', uploadedFile)
          return RNFS.unlink(path)
        })
        .catch(err => {
          console.log('Firebase putFile error:', err)
        })

The firebase.storage() upload always fails; this is the Chrome console output:

Firebase putFile error: Error: An unknown error has occurred.
    at createErrorFromErrorData (NativeModules.js:121)
    at NativeModules.js:78
    at MessageQueue.__invokeCallback (MessageQueue.js:301)
    at MessageQueue.js:118
    at MessageQueue.__guard (MessageQueue.js:228)
    at MessageQueue.invokeCallbackAndReturnFlushedQueue (MessageQueue.js:117)
    at debuggerWorker.js:71

Here's a chunk of the Xcode console output on error. I'm not sure what's "unexpected" about the response the log complains about:

2017-07-19 12:10:35.041 <app>[36795:4216983] unexpected response data (uploading to the wrong URL?)
{
  "name": "resultJsonByContentHash/6ed53b7c3644a09c6580df6618a83878dca12adfb96808191614d81870dfe06e",
  "bucket": "<app>.appspot.com",
  "generation": "1500491432167974",
  "metageneration": "1",
  "contentType": "application/json; charset=UTF-8",
  "timeCreated": "2017-07-19T19:10:32.114Z",
  "updated": "2017-07-19T19:10:32.114Z",
  "storageClass": "STANDARD",
  "size": "142",
  "md5Hash": "k51sAnGeQxIcYOpOMRNqUg==",
  "contentEncoding": "identity",
  "contentDisposition": "inline; filename*=utf-8''6ed53b7c3644a09c6580df6618a83878dca12adfb96808191614d81870dfe06e",
  "crc32c": "i/ZmwQ==",
  "etag": "CKaUtpaGltUCEAE=",
  "downloadTokens": "22acd7d5-e282-423f-96ac-491470c31fca"
}
2017-07-19 12:10:35.043 <app>[36795:4216983] beginChunkFetches has unexpected upload status for headers {
    "Access-Control-Allow-Origin" = "*";
    "Content-Length" = 694;
    "Content-Type" = "application/json; charset=UTF-8";
    Date = "Wed, 19 Jul 2017 19:10:32 GMT";
    Server = UploadServer;
    "access-control-expose-headers" = "X-Firebase-Storage-XSRF";
    "alt-svc" = "quic=\":443\"; ma=2592000; v=\"39,38,37,36,35\"";
    "x-content-type-options" = nosniff;
    "x-guploader-uploadid" = "AEnB2UpMWzXC6bN6D0doRq06GXyj5QfDHUs_32sHigo83Wvoby86cHt2AOlNpTcPB4U8MO_186ogWDcTtthag1uZql6iAFHFdA";
}
2017-07-19 12:10:35.044 <app>[36795:4216983] Premature failure: upload-status:"(null)"  location:(null)

FYI, the issue isn't my security rules: I haven't changed the default rules (below), and I call firebase.auth().signInAnonymously() before the storage calls.

service firebase.storage {
  match /b/{bucket}/o {
    match /{allPaths=**} {
      allow read, write: if request.auth != null;
    }
  }
}

Looking at my Firebase console, I see files created at the correct path, but every file has the following as its content, instead of the serialized JSON that was first written to the local file:

{"contentType":"application\/octet-stream","name":"resultJsonByContentHash\/eada2f3ab7072495fd96bdfd69345ebd9a1c5caa24986fce35043f935d6791a3"}
chrisbianca commented 7 years ago

@fungilation Are you able to upload a full example somewhere so that we can help you debug it? I've been looking at putFile on iOS but only have code to run through the image / asset upload path, not the specific file upload path.

If I can put something together using your example then I'll be able to help.

fungilation commented 7 years ago

Is RNFirebase v3 changing the interface to Firebase Storage? If so, I should test against that before giving you a fuller, isolated test case.

chrisbianca commented 7 years ago

@fungilation The Firebase Storage interface is staying the same in v3, so I just need a full example of generating the JSON file, saving it to disk, and then calling putFile in order to debug and fix this.

fungilation commented 7 years ago

OK, I've updated my original post to show exactly how I generate the SHA3 hash for the path name with the jsSHA package, and I've pasted a sample result object (stringified), which is the text to be stored in Firebase.

Let me know, @chrisbianca, if you still need more details to set up a reproducible test case. The result object includes data from both public and private APIs, so I think testing against a sample is easier.

fungilation commented 6 years ago

Checking in again. Do you still need more details on reproducing?

chrisbianca commented 6 years ago

@fungilation Sorry, things have been a bit hectic our end, but this is very much on my list as soon as we get v3 out of the door in the next couple of weeks... I think there should be enough information available for me to reproduce - I will shout if that's not the case.

fungilation commented 6 years ago

Great! I'm not rushing, and I understand you want to take care of v3 first and test/fix against that after release.

waqqas commented 6 years ago

@chrisbianca I am facing the same problem. I am uploading an image using RNFirebase v3.0.2, and instead of the content of the image, the server stores a JSON with "name", "size" and "contentType" keys. putFile() returns the error "an unknown error has occurred."

Any help resolving the issue would be welcome.

dethell commented 6 years ago

I am seeing the exact same behavior @waqqas mentioned. Was there any resolution to this problem?

dethell commented 6 years ago

Bump

dethell commented 6 years ago

This is a cross-post from several related issues where I had commented:

Found it, in my case at least. The URI of the image from the ImagePicker had a % character in it from the local app cache. That percent was being URI-encoded to '%25', which resulted in the file not being found by the putFile code. Adding a decodeURI call around the URI fixed the issue:

let fileUri = decodeURI(pickerResult.uri)
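
For context, here's a minimal sketch of where that decode fits into an upload flow. pickerResult and the destination path are illustrative assumptions here, not part of the actual fix:

// the picker returned a cache path whose literal '%' had been encoded as '%25';
// decodeURI restores the real on-disk path so the native putFile code can find it
let fileUri = decodeURI(pickerResult.uri)

firebase.storage()
  .ref('/images/profile.jpg') // hypothetical destination path
  .putFile(fileUri)
  .then(uploadedFile => console.log('Uploaded:', uploadedFile))
  .catch(err => console.log('Firebase putFile error:', err))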

Salakar commented 6 years ago

Thanks, all, for reporting this issue and for the discussion around it.

We're aware that Storage has fallen behind slightly on React Native Firebase and would like to bring it up to speed again. I will close this issue for now and track it as well as other issues collectively over on the Storage improvements proposal to be addressed in a future release. See #1260

Salakar commented 6 years ago

@dethell @fungilation @waqqas: I've pushed up quite a few fixes/tweaks for storage in preparation for the v4.3.0 release.

The two fixes relevant to this issue are:

Keep an eye out for the v4.3.0 release.

Thanks for the feedback 👌

Salakar commented 6 years ago

Fixes now live in the v4.3.0 release.



Salakar commented 5 years ago

Hey @fungilation - just thought you'd like to know that the v6 re-write PR is up and well into dev. It includes many improvements, among them putString() support - so you'd no longer need to write a file first 🎉

PR & Changelog: https://github.com/invertase/react-native-firebase/pull/2043