Watson - NSS Front-End Capstone
By Dre Randaci
Summary: An iOS mobile application built with React Native that lets users take a picture of any object and receive prediction classifications from IBM Watson's Visual Recognition service. Each classification comes with links to Google and Wikipedia for quick search results. Users can save pictures to their camera roll and revisit any picture they have previously taken for classification. Image details include a Google Maps view with a marker for location discovery.
The app consumes a C#/.NET Web API (here) built specifically for this project.
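The round trip to that backend can be sketched as below. The `/api/classify` endpoint path and the response shape are illustrative assumptions modeled on Watson Visual Recognition's classification output, not the project's actual contract:

```javascript
// Flatten Watson's nested response (images -> classifiers -> classes)
// into a simple list of { className, score } pairs, sorted by confidence.
function parseClassifications(watsonResponse) {
  const results = [];
  for (const image of watsonResponse.images || []) {
    for (const classifier of image.classifiers || []) {
      for (const c of classifier.classes || []) {
        results.push({ className: c.class, score: c.score });
      }
    }
  }
  return results.sort((a, b) => b.score - a.score);
}

// POST a base64-encoded photo to the (hypothetical) backend endpoint
// and return the parsed classifications.
async function classifyImage(base64Image) {
  const response = await fetch('https://example.com/api/classify', {
    method: 'POST',
    headers: { 'Content-Type': 'application/json' },
    body: JSON.stringify({ image: base64Image }),
  });
  return parseClassifications(await response.json());
}
```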
Features
- Prediction classifications across a wide range of categories
- Facial detection and classification
- Native iOS build generated from JavaScript/React Native
- Camera Roll access
- AirBnB/Google Maps integration
- Multiscreen navigation
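The quick-search links mentioned in the summary (Google and Wikipedia per classification) can be built with a small helper. The helper name is mine; the URL formats are the standard Google search and English Wikipedia query conventions:

```javascript
// Build the quick-search links shown alongside each classification.
function searchLinksFor(className) {
  const query = encodeURIComponent(className);
  return {
    google: `https://www.google.com/search?q=${query}`,
    wikipedia: `https://en.wikipedia.org/wiki/Special:Search?search=${query}`,
  };
}
```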
Technologies
- IBM Watson's Visual Recognition service
- Airbnb Maps
- Custom React-Native components
- React-Native style libraries:
  - React Native Camera
  - React Native Elements
  - React Native Typography
- Built into a native iOS app with Xcode
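The maps library used here (react-native-maps, originally from Airbnb) centers its `MapView` on a "region": a latitude/longitude pair plus deltas that control zoom. A sketch of centering the map on a photo's location, where the helper name and the 0.01 default deltas (roughly neighborhood zoom) are illustrative choices:

```javascript
// Build a react-native-maps region centered on a photo's coordinates.
// latitudeDelta / longitudeDelta are the real region field names;
// the 0.01 default is an assumed zoom level.
function regionForPhoto(latitude, longitude, delta = 0.01) {
  return {
    latitude,
    longitude,
    latitudeDelta: delta,
    longitudeDelta: delta,
  };
}

// Usage in a component (sketch):
// <MapView region={regionForPhoto(photo.lat, photo.lng)}>
//   <Marker coordinate={{ latitude: photo.lat, longitude: photo.lng }} />
// </MapView>
```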
Running the app on your device
- Make sure you have Xcode installed
- Clone the repo
- Run `npm install` in the root of the directory
- Run `pod install` in the `ios` folder
- Start Xcode and choose `File > Open...`
- Navigate to the repo's `ios` folder and select `Watson.xcworkspace`
- Select a device to build in the simulator, or connect your iPhone to your computer via USB and select that device (follow Xcode's instructions if running on your iPhone)
- Build the project
Screenshots