Fotoherkenning Paddenstoelen: een vloek of een zegen? ["Photo recognition of mushrooms: a curse or a blessing?"], Coolia 2020(3)

In recent years there has been an explosion in the availability of smartphone apps that can help with mushroom identification in the field. The approaches vary: some apps identify mushrooms automatically using Artificial Intelligence (AI) and automated image recognition; others require the user to work through traditional dichotomous or multi-access keys; and some offer only a range of images without a clear system for identifying a species of interest.

The Coolia article appears to be related to the article "Artificial Intelligence for plant identification on smartphones and tablets".

Related documents--------------------------------------------------------------------------------------
Bachelor's thesis: Magic Mushroom App (recognizing edible mushrooms with deep learning, in Python)

Deep Shrooms: classifying mushroom images (Python)

ShroomNet: Künstliches neuronales Netz für die Bestimmung von Pilzarten (artificial neural network for the identification of mushroom species)

Artificial Intelligence for plant identification on smartphones and tablets


Apps for mushroom identification----------------------------------------------------------------------

Danish Svampeatlas


iNaturalist Seek

Google Lens

Published on July 5, 2020 at 03:02 PM by optilete


Nice article

Posted by ahospers almost 4 years ago

Vision Model Updates

iNaturalist currently uses vision models in two main places:
1) a private web-based API used by the website and the iNaturalist iOS and Android apps, and
2) within the recently updated Seek app.

When Seek 2.0 was released in April, it included a different vision model than we were using on the web. At that time the web-based model was a third-generation model we started using in early 2018. That web-based model was trained with the idea it would be run on servers, and servers can be configured to have far more computing power than a mobile device. As a result that model was far too large to be run on mobile devices.

Early this year, with an updated Seek in mind, we started another training run with two main goals:
-shrinking the file size of the model, and
-allowing it to recommend taxonomic ranks other than species (e.g. families, genera, etc.).

The mobile version of the model needs to be small in terms of file size to minimize the amount of data app users would need to download. Smaller models can also be used by more devices as they need fewer resources to run (e.g. memory, battery), and can generate results faster, which is important for Seek's real-time camera vision results. These models take a lot of time and money to train, so we also wanted a model that could be simultaneously trained to produce a large web-based version and a smaller version for use in mobile devices.

Unfortunately, shrinking the file size like this slightly decreased model accuracy compared to the larger web-based version (kind of similar to image compression), and we found that was an unavoidable tradeoff. We take this into account when processing the model results, and on average for a similar error rate, the mobile version might recommend a taxon at a higher taxonomic rank than the web-based version. The taxon results we show to users shouldn't be less accurate, but they may be less specific.
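The "less specific, not less accurate" behaviour described above can be sketched as: sum the model's species scores up the taxonomy and recommend the deepest taxon whose combined score clears a confidence threshold. This is only an illustration of the idea, not iNaturalist's actual code; the taxonomy, scores, and threshold below are invented:

```python
# Hypothetical sketch of rank fallback: if no single species is confident
# enough, recommend the deepest higher taxon whose descendants' scores
# sum past the threshold. Taxonomy and scores are invented examples.
def recommend(scores, parent, threshold=0.8):
    """scores: {species: probability}; parent: {taxon: parent taxon}."""
    # Sum each taxon's score over all of its descendant species.
    totals = dict(scores)
    for species, p in scores.items():
        node = parent.get(species)
        while node is not None:
            totals[node] = totals.get(node, 0.0) + p
            node = parent.get(node)

    # Depth = number of ancestors; prefer the deepest confident taxon.
    def depth(taxon):
        d, node = 0, parent.get(taxon)
        while node is not None:
            d, node = d + 1, parent.get(node)
        return d

    confident = [t for t, p in totals.items() if p >= threshold]
    return max(confident, key=depth) if confident else None

parent = {"Amanita muscaria": "Amanita", "Amanita pantherina": "Amanita",
          "Amanita": "Amanitaceae"}
scores = {"Amanita muscaria": 0.55, "Amanita pantherina": 0.40}
print(recommend(scores, parent))  # → "Amanita": no species reaches 0.8
```

With these invented scores, neither species is confident on its own, but together they make the genus a confident recommendation, which mirrors how the mobile model can answer at a higher rank instead of guessing a species.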

More Species Represented
We wanted the model to include more species, even when some species don't have enough photos to be recognized at species level. Some species have so few photos that, if we trained on that small set, the model likely wouldn't have enough information to reliably recognize them.

Our 2018 model only included taxa at species rank. We set a threshold on the number of photos, and species below the threshold were not included. We could still recommend higher taxa by post-processing the results, but the model itself only assigned scores to species. In our latest training run we allowed the photos from species under the threshold to be rolled up into their ancestor taxa until the threshold was reached, and we allowed the model to assign scores to these non-species nodes. This allows more species to be represented in the newer model, sometimes at the genus level, pooled with photos of other under-threshold species in the same genus. Now, instead of knowing nothing about these species, the model can at least identify the genus or family.
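The roll-up of under-threshold species into ancestor taxa can be sketched in a few lines. This is a simplified illustration of the scheme described above, not iNaturalist's training code; the taxonomy, photo counts, and threshold are invented:

```python
# Hypothetical sketch of the training-label roll-up: a species with enough
# photos becomes its own label; otherwise its photos are assigned to the
# nearest ancestor whose aggregated photo count reaches the threshold.
def roll_up(photo_counts, parent, threshold=50):
    """Map each species to the taxon node used as its training label."""
    # Aggregate counts up the tree so every taxon's total includes
    # the photos of all of its descendant species.
    totals = dict(photo_counts)
    for species, n in photo_counts.items():
        node = parent.get(species)
        while node is not None:
            totals[node] = totals.get(node, 0) + n
            node = parent.get(node)

    # Each species climbs until it reaches a node that is big enough.
    assignment = {}
    for species in photo_counts:
        node = species
        while totals[node] < threshold and parent.get(node) is not None:
            node = parent[node]
        assignment[species] = node
    return assignment

parent = {"Russula emetica": "Russula", "Russula rosea": "Russula",
          "Russula": "Russulaceae"}
counts = {"Russula emetica": 120, "Russula rosea": 15}
print(roll_up(counts, parent))
# emetica has enough photos to stay a species-level label;
# rosea's 15 photos roll up to the genus Russula.
```

The effect matches the description above: a poorly photographed species is no longer dropped outright, its photos instead teach the model something at genus or family level.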

Posted by optilete almost 4 years ago

Does this contain all the articles? I think you also had a German-language article back then.

Posted by ahospers more than 3 years ago

With < hr > you get a nice horizontal line.

Posted by ahospers more than 3 years ago

Nice piece. I was just trying it on plants, but that wasn't a good test.. this one is much better.

Posted by ahospers more than 3 years ago

Which test/review is good (url) and which is bad (url)?

Posted by optilete more than 3 years ago
page 11

'Software and hardware used
The models in this project were trained with TensorFlow, Google's platform for machine learning. The code was written in Python 3, using Keras, an open-source package that acts as an interface to TensorFlow.
All preprocessing of the data (downloading, reading files, etc.) was written by the software developer involved in this project, in Python or in Bash scripts using standard Linux tools. The same applies to the analysis and presentation of the results.
The experimental automated cropping of images was done with ImageMagick.
All models use the InceptionV3 architecture. Tests were also run with other architectures, such as VGG16, ResNet50 and Xception. All trained models were stored…'

Posted by optilete more than a year ago
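The setup quoted from page 11 (Keras as the interface to TensorFlow, InceptionV3 as the base architecture) corresponds to a standard transfer-learning pattern. A minimal sketch of that pattern, not the thesis's actual code: the class count is invented, and `weights=None` is used here to keep the example offline, where the thesis would start from pretrained ImageNet weights:

```python
import tensorflow as tf

NUM_CLASSES = 10  # hypothetical number of mushroom species

# InceptionV3 backbone without its ImageNet classification head.
# weights="imagenet" would load pretrained weights; None keeps this offline.
base = tf.keras.applications.InceptionV3(
    weights=None, include_top=False, input_shape=(299, 299, 3))
base.trainable = False  # freeze the backbone for the initial training phase

# New classification head for the mushroom classes.
model = tf.keras.Sequential([
    base,
    tf.keras.layers.GlobalAveragePooling2D(),
    tf.keras.layers.Dense(NUM_CLASSES, activation="softmax"),
])
model.compile(optimizer="adam", loss="categorical_crossentropy",
              metrics=["accuracy"])
model.summary()
```

Swapping in VGG16, ResNet50, or Xception, as the thesis reports testing, only means replacing the `InceptionV3` constructor with the corresponding class from `tf.keras.applications`.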
