Android app data analysis over supermarket deals images
For a university project I need to create an Android application that collects and aggregates data from various supermarket flyers, in order to filter deals by user-provided criteria.
As inspiration I am looking at commercial applications, and I came across Shopfully, which has a really interesting feature.
Their app is able to recognize the images in the flyers, placing a floating plus button on each product.
Example:
If the user presses the floating plus button, product details are shown (price, discount amount):
I am wondering how they did that. I imagine they use some sort of image recognition to process the flyers and OCR to parse the text.
Does anyone have ideas on how I might reproduce this feature?
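One way to reproduce this (a sketch, not necessarily Shopfully's actual method): run OCR over the flyer image to get word-level bounding boxes (pytesseract's image_to_data and ML Kit's text recognition both return these), pick out tokens that look like prices, and anchor a floating button at each match. The tuple format and the price regex below are my own assumptions:

```python
import re

# Hypothetical OCR output: one (text, x, y, width, height) tuple per word,
# the kind of data pytesseract.image_to_data or ML Kit text recognition yields.
PRICE_RE = re.compile(r"^\d+[.,]\d{2}\s?(?:€|\$)?$")

def find_price_buttons(boxes):
    """Return (x, y) anchor points for a floating '+' button,
    one for each OCR token that looks like a price."""
    anchors = []
    for text, x, y, w, h in boxes:
        if PRICE_RE.match(text.strip()):
            anchors.append((x + w // 2, y + h // 2))  # centre of the price label
    return anchors
```

A production app more likely runs an object-detection model to find whole product tiles first, with OCR only inside each tile; the sketch above is the simplest OCR-only variant.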
See also questions close to this topic
-
Updating a Single Column In Room Database
This is the function I'm using for the update:
private fun updateSettingsDatabase(settingsDao: SettingsDao) {
    lifecycleScope.launch {
        settingsDao.update(
            SettingsEntity(
                1,
                nightMode = nightModeResult,
            )
        )
    }
}

@Query("SELECT * FROM `settings-table`")
fun fetchCurrentSettings(): Flow<List<SettingsEntity>>
I specified nightMode= because I thought that this way I would only be updating this column, but it turns out that it resets every other column. How do I update a single column while keeping the values of the rest of the columns?
-
Using EditTextPreference for a Goto search
Sorry for my poor English. I want to use EditTextPreference in my bottom nav, like the picture below: ![screenshot][1]
I have a RecyclerView XML layout in my project with many sub-CardView layouts (which is scrollable), and I want to create an item in the bottom nav called "Goto". When the "Goto" item is clicked, I want it to pop up like the screenshot. When the user enters a number (within the range of the CardViews, i.e. if there are 40 CardViews the user must enter 1-40), I want to find the CardView by its ID. Thank you, and I hope you got it; if you have any questions let me know. [1]: https://i.stack.imgur.com/grK8P.jpg
My XML format looks like this. As you can see below, since the CardViews are huge in number it is not practical to scroll all the way down; that is why I need the "Goto" item in the bottom nav to find a card by its ID when the user enters a number in the EditTextPreference, as in the screenshot. (The screenshot is not from my app.)
<LinearLayout>
    <LinearLayout>
        <androidx.cardview.widget.CardView>
            <RelativeLayout>
                <TextView/>
            </RelativeLayout>
        </androidx.cardview.widget.CardView>
    </LinearLayout>
    ...the same CardView block repeats many more times...
</LinearLayout>
-
iOS launcher in Android Studio
I'm trying to change all of the installed app icons in the Android OS into iOS-style icons. Please help me with the proper code or library for Android/Kotlin.
-
Image Background Remover Using Python
I want to make an image background remover using Python, but I do not know how much data and time it will take to reach the accuracy of remove.bg. I'm using the U-2-Net AI models: https://github.com/xuebinqin/U-2-Net/ Some results are comparable, but not every result is as good as remove.bg's. As a rating I would give my app 2/5 and remove.bg 4/5. Please tell me how I can achieve accuracy like remove.bg. Any help or suggestions are appreciated. Thanks
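As an aside on how a U-2-Net-style output is typically used: the network predicts a saliency/alpha map, and the "background removal" step is plain alpha compositing. A minimal numpy sketch, assuming the mask has already been normalised to a [0, 1] float array:

```python
import numpy as np

def composite(img, alpha, bg_value=255):
    """Blend an (H, W, 3) image over a flat background using an (H, W)
    alpha matte in [0, 1], the way a saliency map is typically applied."""
    alpha = alpha[..., None]  # add a channel axis so it broadcasts over RGB
    bg = np.full_like(img, bg_value, dtype=np.float32)
    out = img.astype(np.float32) * alpha + bg * (1.0 - alpha)
    return out.astype(np.uint8)
```

Accuracy differences against remove.bg usually come from training data and matting refinement around hair/edges, not from this compositing step.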
-
Is there a way to guarantee a certain number of lines detected with cv2.HoughLines()?
This question is an extension to my previous question asking about how to detect a pool table's corners. I have found the outline of a pool table, and I have managed to apply the Hough transform on the outline. The result of this Hough transform is below:
Unfortunately, the Hough transform returns multiple lines for a single table edge. I want the Hough transform to return four lines, each corresponding to an edge of the table, given any image of a pool table. I don't want to tweak the parameters of the Hough transform manually (because the outline of the pool table might differ in each image). Is there any way to guarantee that four lines are generated by cv2.HoughLines()?
Thanks in advance.
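There is no built-in guarantee, but a common workaround is to merge near-duplicate lines yourself: cv2.HoughLines returns lines ordered by accumulator votes (strongest first), so greedily keeping the first line of each (rho, theta) cluster leaves one strong line per physical edge. The tolerances below are assumptions to tune per image size:

```python
import numpy as np

def merge_lines(lines, rho_tol=20.0, theta_tol=np.pi / 18):
    """Greedily merge near-duplicate (rho, theta) pairs, keeping the
    first (highest-vote) line of each cluster.  Pass in flat pairs,
    e.g. [tuple(l[0]) for l in cv2.HoughLines(...)]."""
    kept = []
    for rho, theta in lines:
        if not any(abs(rho - r) < rho_tol and abs(theta - t) < theta_tol
                   for r, t in kept):
            kept.append((rho, theta))
    return kept
```

If more than four lines survive, keep the first four (the strongest); if fewer, lower the Hough vote threshold and merge again.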
-
How to make a chatbot for discord using python
I need advice and/or resources for making a Discord chatbot in Python. I have some knowledge of Python and the Discord API, but I know nothing about chatbots or how to implement them in Python. Can anyone point me to resources about chatbots and artificial intelligence?
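One design that keeps a bot learnable and testable without touching the network: write the reply logic as a pure function and call it from discord.py's on_message handler, sending back whatever string it returns. The commands below are placeholders:

```python
from typing import Optional

# A pure command -> response table keeps the bot logic testable offline.
# In a real bot you would call reply() from discord.py's on_message event
# and send the returned string back to the channel.
RESPONSES = {
    "hello": "Hi there!",
    "help": "Try: hello, help, ping",
    "ping": "pong",
}

def reply(message: str) -> Optional[str]:
    """Return a canned reply, or None if the bot should stay silent."""
    key = message.strip().lower().lstrip("!")
    return RESPONSES.get(key)
```

Swapping the dictionary for an intent classifier or a retrieval model upgrades this from a command bot towards a chatbot without changing the Discord wiring.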
-
how to print all parameters of a keras model
I am trying to print all 1290 parameters in the dense_1 layer, but model.get_weights()[7] only shows 10 parameters. How can I print all 1290 parameters of the dense_1 layer? And what is the difference between model.get_weights() and model.layer.get_weights()?
>model.get_weights()[7]
array([-2.8552295e-04, -4.3254648e-03, -1.8752701e-04,  2.3482188e-03,
       -3.4848123e-04,  7.6121779e-04, -2.7494309e-06, -1.9068648e-03,
        6.0777756e-04,  1.9550985e-03], dtype=float32)

>model.summary()
Model: "sequential"
_________________________________________________________________
 Layer (type)                 Output Shape              Param #
=================================================================
 conv2d (Conv2D)              (None, 26, 26, 32)        320
 conv2d_1 (Conv2D)            (None, 24, 24, 64)        18496
 max_pooling2d (MaxPooling2D) (None, 12, 12, 64)        0
 dropout (Dropout)            (None, 12, 12, 64)        0
 flatten (Flatten)            (None, 9216)              0
 dense (Dense)                (None, 128)               1179776
 dropout_1 (Dropout)          (None, 128)               0
 dense_1 (Dense)              (None, 10)                1290
=================================================================
Total params: 1,199,882
Trainable params: 1,199,882
Non-trainable params: 0
_________________________________________________________________
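For what it's worth, the 10 values are expected: model.get_weights() returns every layer's kernel and bias interleaved, so with two Conv2D and two Dense layers index 7 is the bias of dense_1 (10 values) and index 6 is its (128, 10) kernel; layer.get_weights() on a single layer returns just that layer's [kernel, bias]. The 1290 in the summary is both tensors together, which a quick numpy check confirms:

```python
import numpy as np

# dense_1 maps 128 features to 10 outputs, so its parameters are a
# (128, 10) kernel plus a (10,) bias vector: 128 * 10 + 10 = 1290.
kernel = np.zeros((128, 10))
bias = np.zeros(10)
total = kernel.size + bias.size
print(total)  # 1290
```

So to see all 1290 parameters, print model.get_weights()[6] alongside [7].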
-
Problem with recognizing numbers with easyocr
I have been trying to read numbers using easyocr, but depending on the font it gives different results. For example, for the numbers in the first line it shows the correct output, but for the one at the bottom it doesn't read 0, 1 and 7 correctly. Is there a way to solve this? I have tried some morphological operations, blurring, binary conversion, etc. None worked for the font at the bottom.
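Two things that sometimes help in cases like this (suggestions, not a guaranteed fix): easyocr's readtext accepts an allowlist argument to restrict the character set to digits, and enlarging small or thin glyphs before recognition often improves accuracy. A nearest-neighbour upscale needs only numpy:

```python
import numpy as np

def upscale(img, factor=3):
    """Nearest-neighbour upscale of a grayscale image array; OCR engines
    often read small or thin digits better after enlargement."""
    return np.kron(img, np.ones((factor, factor), dtype=img.dtype))
```

easyocr accepts numpy arrays directly, so reader.readtext(upscale(img), allowlist='0123456789') combines both ideas.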
-
Extracting handwritten text values from an image using the Tesseract-OCR library in Python
I am looking to extract handwritten text from an uploaded image. I have tried using the Tesseract OCR library via the Java API and via Python with pytesseract. The data path and the eng.traineddata language file are already set.
ITesseract _tesseract = new Tesseract();
result = _tesseract.doOCR(file);
The input is like a form filled in by a human (a black-and-white form). The image contains printed as well as handwritten content. The output string contains only part of the content of the given input image file.jpg; the majority of the data is missing from the result String. I would like to get all values, both printed and handwritten, from the input form.
P.S. I tried with Python as well:
pytesseract.pytesseract.tesseract_cmd = r'C:/Program Files/Tesseract-OCR/tesseract.exe'
print(pytesseract.image_to_string(r'sample1.jpg'))
E.g., the results from both approaches contain either wrongly-spelled or missing data.
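A caveat worth knowing: stock Tesseract is trained on printed text and is generally weak on handwriting, so partial output on a mixed form is expected. Binarising the scan first often recovers more of the printed part; below is a numpy-only sketch of Otsu's threshold (the same idea as cv2.threshold with THRESH_OTSU), to apply before calling image_to_string:

```python
import numpy as np

def otsu_threshold(gray):
    """Pick a binarisation threshold by maximising the between-class
    variance (Otsu's method) over the 8-bit histogram."""
    hist = np.bincount(gray.ravel(), minlength=256).astype(np.float64)
    total = hist.sum()
    cum = np.cumsum(hist)                        # pixel count at or below t
    cum_mean = np.cumsum(hist * np.arange(256))  # intensity mass at or below t
    best_t, best_var = 0, -1.0
    for t in range(255):
        w0 = cum[t] / total
        w1 = 1.0 - w0
        if w0 == 0.0 or w1 == 0.0:
            continue
        mu0 = cum_mean[t] / cum[t]
        mu1 = (cum_mean[-1] - cum_mean[t]) / (cum[-1] - cum[t])
        var = w0 * w1 * (mu0 - mu1) ** 2
        if var > best_var:
            best_var, best_t = var, t
    return best_t
```

For the handwritten fields themselves, a handwriting-specific recognizer (e.g. a cloud vision API or a TrOCR-style model) will likely do much better than Tesseract.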
-
Please explain the function of axis=0 in the following line of code
I've read in a tutorial that the following line of code is being used to drop/remove columns with null values.
data = data.dropna(axis=0)
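For reference, that tutorial's wording is worth double-checking: axis=0 drops rows that contain nulls, while axis=1 drops columns. A minimal pandas example on made-up data:

```python
import numpy as np
import pandas as pd

df = pd.DataFrame({"a": [1, 2, np.nan], "b": [4, 5, 6]})

rows_dropped = df.dropna(axis=0)  # removes the one row that has a NaN
cols_dropped = df.dropna(axis=1)  # removes the one column that has a NaN

print(rows_dropped.shape)  # (2, 2)
print(cols_dropped.shape)  # (3, 1)
```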
What I don't get is the purpose of axis=0; how does it work?
-
Modelling breastfeeding duration
I have a dependent variable Y = the number of complete months a child was breastfed, together with some independent variables X1, X2, X3. Colleagues suggest time-series models, multiple linear regression, or logistic regression (after categorizing). From my point of view, I believe Poisson or negative binomial regression is appropriate, since this is count data.
Which model do you think I should run? Please provide literature if there is any.
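Before choosing between Poisson and negative binomial, it is common to check for overdispersion: the Poisson model assumes variance equals the mean, so if the sample variance of Y is much larger than its mean, a negative binomial model is usually preferred. A tiny numpy helper for that check:

```python
import numpy as np

def overdispersion_ratio(y):
    """Sample variance-to-mean ratio of a count variable: a ratio near 1
    is consistent with Poisson; a ratio well above 1 (overdispersion)
    points towards a negative binomial model instead."""
    y = np.asarray(y, dtype=float)
    return y.var(ddof=1) / y.mean()
```

Duration-type counts like months of breastfeeding are often right-censored and overdispersed, so survival models are also sometimes used; checking the ratio on your data is a cheap first step.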