Finding a Pretrained Autoencoder Model
I am trying to use an autoencoder for image compression. So far, I have trained my own model (based on the TF 2.0 tutorial), with poor results. I was thinking that a pretrained model could give me much better results. I've looked in TensorFlow Hub and Model Zoo, among others, with no luck so far. Could anyone suggest a good source for this sort of thing?
See also questions close to this topic

List of xy coordinates to predict a xy target
I have a dataset of coordinate points (X, Y). Each column corresponds to a single coordinate, so for n points I have 2n columns following the pattern X1, Y1, X2, Y2, ..., Xn, Yn.
Each row corresponds to a polygon, described by a sequence of (X, Y) coordinates; these are my features. For each row, I have an output target which is a point with XY coordinates, so there are 2 outputs (TX and TY).
My output corresponds to the location of a target to predict in (X, Y) from those points.
I have already done some work on the dataset by transforming all these point coordinates into vector coordinates, to give the features more cohesion. This let me build linear regression models, notably with Ridge, but the predictions do not satisfy me; I would like to be more precise.
Have you ever worked on a similar problem? If so, I am looking for avenues to explore; otherwise, your ideas are welcome.
The aim of my project is to predict the location of electrical outlets in a given room. For that, I translated the room into a polygon and the electrical outlet into a target T.
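As a point of comparison for the Ridge models mentioned above, here is a minimal NumPy sketch of multi-output ridge regression in closed form. The polygon coordinates and targets below are made up purely for illustration:

```python
import numpy as np

def ridge_fit(X, Y, alpha=1.0):
    # Closed-form ridge: W = (X^T X + alpha*I)^-1 X^T Y.
    # Y has two columns, so TX and TY are fitted at once.
    n_features = X.shape[1]
    return np.linalg.solve(X.T @ X + alpha * np.eye(n_features), X.T @ Y)

# Toy rows: polygons with 2 points each (X1, Y1, X2, Y2)
# and a 2-column target (TX, TY).
X = np.array([[0.0, 0.0, 1.0, 1.0],
              [0.0, 1.0, 2.0, 0.0],
              [1.0, 0.0, 0.0, 2.0]])
Y = np.array([[0.5, 0.5],
              [1.0, 0.5],
              [0.5, 1.0]])

W = ridge_fit(X, Y, alpha=0.1)
pred = X @ W  # one (TX, TY) prediction per polygon
```

If a linear model tops out below the precision you need, nonlinear models (gradient-boosted trees, a small neural network) on the same vector features are a natural next step.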

Clustering group with few samples
I would like feedback from someone with more experience.
I have a dataframe (in the format of the image I attached) with about 1 million samples and 50 features.
What I'm looking for are customers similar to those who own 'Product A'. So I thought about using dummy variables on the categorical features and then clustering. The problem: customers who own 'Product A' represent about 1% of all customers, so I am not sure a cluster will be able to isolate the group I am looking for. Is clustering appropriate in this case? If so, do you know the most efficient algorithm for it? I have only worked with K-means, and I don't know if it is ideal here, since it requires specifying the number of clusters in advance.
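Given the 1% base rate, an alternative to unsupervised clustering is to treat the Product A owners as a reference group and rank all customers by distance to that group's centroid. A rough NumPy sketch on made-up data (the feature matrix and ownership flags here are hypothetical):

```python
import numpy as np

def rank_by_similarity(features, owns_product_a):
    # Centroid of the minority group that owns Product A.
    centroid = features[owns_product_a].mean(axis=0)
    # Euclidean distance of every customer to that centroid.
    dist = np.linalg.norm(features - centroid, axis=1)
    return np.argsort(dist)  # most similar customers first

rng = np.random.default_rng(0)
X = rng.normal(size=(100, 5))        # 100 customers, 5 numeric features
owners = np.zeros(100, dtype=bool)
owners[:3] = True                    # ~1-3% own Product A
order = rank_by_similarity(X, owners)
```

With real data you would one-hot encode the categoricals and scale the features first; a supervised propensity model (predicting ownership directly) is another option that copes well with rare classes.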

What should I do if there are too many zero values in the outlier handling part?
I am working on a data science project about churn analysis (whether a customer is leaving or not). I am trying to do the outlier-handling part, but I have a question about how to think when my data has many zero values. I know they may carry meaning, but please see the linked results: value counts, z-scores, hard edges, and outliers.
What should I do for better results, and should I keep all the zero values? Any suggestions?
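One common approach is to treat the zeros as their own "no activity" group and compute outlier statistics only over the nonzero values, so the zero spike does not drag the mean and standard deviation around. A small NumPy sketch (the function name is mine, not from any library):

```python
import numpy as np

def zscore_nonzero(x):
    # Z-scores over nonzero values only; zeros get NaN so they
    # can be handled as a separate "no activity" category.
    x = np.asarray(x, dtype=float)
    nonzero = x != 0
    nz = x[nonzero]
    z = np.full_like(x, np.nan)
    z[nonzero] = (nz - nz.mean()) / nz.std()
    return z

z = zscore_nonzero([0, 0, 1, 2, 3])
```

Whether the zeros then stay in the model (e.g. as a binary "has activity" flag plus the cleaned value) depends on what they mean in your churn context.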

I am confused by structure of DeepLab v3
I am a beginner in the field of deep learning. The problem of semantic segmentation drew my attention, and I set out to apply the state-of-the-art DeepLab v3 algorithm. Right now I am having a hard time grasping the concepts behind DeepLab v3. A typical tutorial post explains it with a picture of atrous convolution at an output stride of 16 with a rate of 2.
Atrous convolutions are meant to enlarge the filter's field of view without increasing the computational cost. But I don't understand where the atrous convolutions are placed in the backbone of the pretrained ResNet-101.
Also, I am not able to understand the concept of the ASPP module. As the tutorial mentions, it is used to improve the contextual reasoning of the network, but when it comes to applying it practically, they only show an image. I tried to implement the ASPP module but got confused about the dimension sizes.
Please explain the structure of DeepLab v3.
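For intuition on the atrous (dilated) part: the kernel taps are simply spread `rate` samples apart, so the field of view grows without adding weights. A 1-D NumPy sketch, not the actual DeepLab code:

```python
import numpy as np

def atrous_conv1d(x, w, rate):
    # A k-tap kernel at dilation `rate` spans (k-1)*rate + 1 input
    # samples, but still uses only k multiplications per output.
    k = len(w)
    out_len = len(x) - (k - 1) * rate
    return np.array([sum(w[j] * x[i + j * rate] for j in range(k))
                     for i in range(out_len)])

# A 3-tap kernel at rate 2 sees samples i, i+2, i+4.
out = atrous_conv1d(np.arange(10.0), [1.0, 1.0, 1.0], rate=2)
```

In DeepLab v3 the same idea is applied in 2-D inside the last ResNet blocks (replacing strided convolutions to keep the output stride at 16), and the ASPP module runs several such convolutions at different rates in parallel and concatenates their outputs along the channel axis.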

Warp Image by Diagonal Sine Wave
I'm trying to warp a colour image using a sine function in OpenCV, and I was successful in doing so. However, how can I make a 'diagonal' warp using a sine wave?
My code is this:
Mat result = src.clone();
for (int i = 0; i < src.rows; i++) {          // y
    for (int j = 0; j < src.cols; j++) {      // x
        for (int ch = 0; ch < 3; ch++) {      // each colour channel
            int offset_x = 0;
            int offset_y = (int)(25.0 * sin(3.14 * j / 150));
            if (i + offset_y >= 0 && i + offset_y < src.rows)
                result.at<Vec3b>(i, j)[ch] = src.at<Vec3b>(i + offset_y, j)[ch];
            else
                result.at<Vec3b>(i, j)[ch] = 0;
        }
    }
}
imshow("result", result);
How can I warp my image like this picture? I don't want to draw a graph; I want to warp the image itself.
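One way to get a diagonal wave is to make the phase depend on both axes, i.e. use `i + j` instead of just `j`. Sketched here in Python/NumPy rather than C++ for brevity (the amplitude and period values are just examples):

```python
import numpy as np

def diagonal_sine_warp(src, amplitude=25.0, period=150.0):
    # Phase (i + j) makes the wave front run along the diagonal.
    h = src.shape[0]
    w = src.shape[1]
    result = np.zeros_like(src)
    for i in range(h):
        for j in range(w):
            offset = int(amplitude * np.sin(np.pi * (i + j) / period))
            if 0 <= i + offset < h:
                result[i, j] = src[i + offset, j]
    return result
```

The equivalent change in the C++ loop is a one-liner: `int offset_y = (int)(25.0 * sin(3.14 * (i + j) / 150));`.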

AttributeError: module 'tensorflow._api.v1.config' has no attribute 'set_visible_devices'
First of all, I apologize that my English may be hard to understand. I am currently doing computer vision work with TensorFlow 1.14. While running the model on the GPU, the following problem occurred.
AttributeError: module 'tensorflow._api.v1.config' has no attribute 'set_visible_devices'
The current development environment is as follows.
 Python: 3.7.9
 conda: 4.8.3
 tensorflow: 1.14.0
 keras: 2.3.1
In addition, I currently have 4 GPUs, and I want to use 2 of them as if they were a single GPU. Can you give me a good idea for this?
Thank you.
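The error occurs because `tf.config.set_visible_devices` only exists in TF 2.x. A version-independent way to restrict which GPUs TF 1.14 sees is the `CUDA_VISIBLE_DEVICES` environment variable, set before TensorFlow is imported (the indices 0 and 1 are just an example):

```python
import os

# Expose only two of the four GPUs to this process.
# This must run before `import tensorflow`.
os.environ["CUDA_VISIBLE_DEVICES"] = "0,1"
```

To then make the two visible GPUs act like one for training, `tf.distribute.MirroredStrategy` (available in 1.14) replicates the model across the visible devices and aggregates the gradients.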

Lossless compression of a sequence of similar grayscale images
I would like the best possible compression ratio for a sequence of similar grayscale images. Note that I need an absolutely lossless solution (meaning I should be able to verify it with a hash algorithm).
What I tried
I had the idea of converting my images into a video, since there is a chronology between the images: the encoder should exploit the fact that not everything in the scene changes between two consecutive pictures. So I tried ffmpeg, but I ran into several problems due to the sRGB-to-YUV colorspace conversion. I didn't understand everything, but it seems like a nightmare.
Example of code used:
ffmpeg -i %04d.png -c:v libx265 -crf 0 video.mp4   # to convert into a video
ffmpeg -i video.mp4 %04d.png                       # to recover the images
My second idea was to do it by hand with ImageMagick. I took the first image as a reference and created a new image that is the difference between image 1 and image 2. Then I tried to add the difference image back to image 1 (to recover image 2), but it didn't work. Judging by the size of the recreated picture, the image is clearly not the same; I think an unwanted lossy step occurred during the process.
Example of code used:
composite -compose difference 0001.png 0002.png diff.png     # to create the diff image
composite -compose difference 0001.png diff.png recover.png  # to recover image 2
Do you have any idea about my problem? And why can't I achieve a perfect recovery with ImageMagick?
Thanks ;)
Here are 20 samples images : https://cloud.damien.gdn/d/f1a7954a557441989432/
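For reference, ImageMagick's `-compose difference` computes the absolute difference |A - B|, which throws away the sign, so adding it back to image 1 cannot reconstruct image 2 exactly. A signed (modulo-256) difference is fully reversible, as this NumPy sketch on random data shows:

```python
import numpy as np

img1 = np.random.randint(0, 256, (4, 4), dtype=np.uint8)
img2 = np.random.randint(0, 256, (4, 4), dtype=np.uint8)

diff = img2 - img1        # uint8 arithmetic wraps modulo 256
recovered = img1 + diff   # wraps back: exactly equal to img2
```

Storing the first image plus losslessly compressed diff frames (e.g. as PNGs) keeps the hash check intact. On the video side, note that x265 needs `-x265-params lossless=1` rather than `-crf 0` to be truly lossless, and the pixel format must avoid chroma subsampling.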

Compression algorithms for nearly uniform data
I've seen questions on compression algorithms around SE, but none quite fit what I'm looking for. Clearly truly uniformly distributed data cannot be compressed, but how close can we get?
My (probably incorrect) thoughts: I would imagine that by transforming the data (normalizing in some way?), you could accentuate the nonuniformity aspects of nearly uniform data and then use that transformed set to compress, perhaps along with the inverse transform or its parameters. But maybe I'm totally wrong and they all perform equally terribly as the data approaches uniformity?
When I look at lists of (lossless) compression algorithms, I don't see them ranked by how effective they are against certain types of data, at least not in any concrete terms. Does anyone know of a source that dives into this?
As background, I have an application where the data set is not independent, but nevertheless appears to be nearly uniform (most of the symbols have very low frequencies, and none of them have very high frequencies). So I was wondering if there are algorithms that can exploit the sampling dependence even if the data frequencies are mostly low. Then of course it would be more helpful to have a source that detailed exactly why some compression algorithms might perform better at this than others, if such a thing existed.
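A quick way to sanity-check how close to uniform the data really is, is to compute its empirical Shannon entropy: no lossless coder can beat that many bits per symbol on i.i.d. data, and the gap below log2(alphabet size) is roughly what entropy coding alone can reclaim. A small stdlib-only sketch:

```python
import math
from collections import Counter

def entropy_bits_per_symbol(data):
    # Empirical Shannon entropy H = -sum p * log2(p) over symbol frequencies.
    counts = Counter(data)
    n = len(data)
    return -sum((c / n) * math.log2(c / n) for c in counts.values())
```

Sampling dependence is a separate lever: compressors with context modeling (e.g. PPM- or LZMA-style) can exploit correlations between symbols even when the first-order frequencies look nearly flat.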

How to write a hashmap to a file in a memory efficient format?
I am writing a Huffman coding/decoding algorithm, and I am running into the problem that storing the Huffman tree takes up way too much room. Currently, I convert the tree into a hash map of the form HashMap<Character(s), Huffman code> and then store that map. The issue is that, while the string itself compresses well, the Huffman tree data stored in the hash map adds so much overhead that the result ends up bigger than the original. Currently I am just naively writing [symbol, code] pairs to the file, but I imagine there must be a cleverer way to do it. Any ideas?
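The standard trick is canonical Huffman coding: because the codes can be reconstructed from their lengths alone, you store only one code length per symbol (a few bits each) instead of [symbol, code] pairs. A sketch of the rebuild step in Python (the data layout is my own, not from any particular library):

```python
def canonical_codes(lengths):
    # Rebuild prefix codes from code lengths alone.
    # Symbols are ordered by (length, symbol) so that both the
    # writer and the reader derive identical codes.
    items = sorted(lengths.items(), key=lambda kv: (kv[1], kv[0]))
    codes = {}
    code = 0
    prev_len = 0
    for i, (sym, ln) in enumerate(items):
        if i > 0:
            code = (code + 1) << (ln - prev_len)
        codes[sym] = format(code, "0{}b".format(ln))
        prev_len = ln
    return codes

codes = canonical_codes({"a": 1, "b": 2, "c": 2})
```

For a 256-symbol byte alphabet the whole header is just 256 length values, which is far smaller than serializing the tree or the full code map.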