MATLAB images in double format
MATLAB images can be represented in double format (either grayscale or RGB). In this format the range is 0 to 1.
When these images are displayed with imshow, how many colors are used (8-bit, 16-bit)?
What palette/colormap is used?
What happens to the values above 1 and below 0?
The data is displayed at uint8 resolution, i.e. 256 levels per channel, following the RGB scheme for color images (grayscale doubles are mapped through the default gray colormap). Values above 1 and below 0 are clipped: anything above 1 renders as white, anything below 0 as black.
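To make the clipping concrete, here is a small NumPy sketch of the same behaviour (my own illustration, not MATLAB's actual implementation): out-of-range doubles saturate, and the displayable range is quantized to 8 bits.

```python
import numpy as np

def to_display_uint8(img):
    """Mimic how a double image in [0, 1] is typically displayed:
    out-of-range values are clipped, then the range is quantized
    to 256 (8-bit) levels."""
    clipped = np.clip(img, 0.0, 1.0)   # >1 -> 1 (white), <0 -> 0 (black)
    return np.round(clipped * 255).astype(np.uint8)

img = np.array([-0.5, 0.0, 0.5, 1.0, 1.7])
shown = to_display_uint8(img)          # values outside [0, 1] saturate
```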
See also questions close to this topic
Cross correlation in Matlab to compute time lag for two time series
I have two time series, y1 and y2 and need to find the time lag between them using cross-correlation in Matlab. Then I need to plot the cross-correlation, align the two plots and replot. I have written a bit of Matlab code to do this but I think the cross-correlation plot is weird and I am unable to interpret it. I am not sure what I am doing wrong here, can you please help? Thanks.
Here is my code at this point:
% Generate time series
t = 1:1000;
y1 = 2*sin(2*pi*t/5);
y2 = 2*sin(2*pi*t/5 + 2);          % y2 has an introduced phase lag of 2

% Plot the two time series
figure(1);
plot(t, y1, 'b-', t, y2, 'r-');
axis([0 50 -2 2]), grid;

% Compute the cross correlation using the function xcorr
maxlag = length(y1);               %# set a max lag value here
[c, lags] = xcorr(y1, y2, 'coeff');  % compute cross correlation
figure(2);
plot(lags, c)                      % plot lag versus correlation
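For reference, the lag itself is read off the location of the cross-correlation peak. Here is a NumPy sketch of the idea on a pair of pulses with a known delay (variable names are mine; the analogous MATLAB call would be [c, lags] = xcorr(y2, y1) followed by locating max(c)):

```python
import numpy as np

# Two signals where y2 is y1 delayed by a known number of samples
n = np.arange(200)
y1 = np.exp(-0.5 * ((n - 80) / 5.0) ** 2)   # Gaussian pulse centred at sample 80
y2 = np.exp(-0.5 * ((n - 95) / 5.0) ** 2)   # same pulse, delayed by 15 samples

# Full cross-correlation; the lag axis runs from -(N-1) to +(N-1)
c = np.correlate(y2, y1, mode='full')
lags = np.arange(-(len(y1) - 1), len(y1))
estimated_delay = lags[np.argmax(c)]        # peak location = estimated lag
```

One caveat relevant to the code in the question: for periodic signals like pure sinusoids, the cross-correlation peaks repeat every period, so the lag is only identifiable modulo the period, which can make the plot look "weird".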
Training a Neural Network for Image Classification for a task job
I have no idea how to do this. I have been given a task for a job offer that requires me to create an image classification model for Baci chocolates. It is a binary problem: one set of chocolates is damaged and the other is not, and the difference is not visible to the human eye. The resolution is 1920 x 2000, but my concern is that one set has only 27 images (damaged) while the other has 158 (fine chocolates). I thought of using a CNN or a residual neural network, but I don't think they will learn from such a small set. Is there any other way I can solve this? I repeat, this is a task for a job offer and I must use this small dataset. Any help will be appreciated because I am stuck.
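With only 27 damaged examples, the usual advice is transfer learning (a pretrained backbone with a small trainable head), class weighting, and aggressive label-preserving augmentation; treating it as anomaly detection on the majority class is another option. As a minimal illustration of the augmentation part only, here is a NumPy-only sketch (function names are mine):

```python
import numpy as np

def augment(image, rng):
    """Generate simple label-preserving variants of one image:
    random horizontal flips and 90-degree rotations, which together
    cover 8 distinct orientations of a square image."""
    out = image
    if rng.random() < 0.5:
        out = np.fliplr(out)
    out = np.rot90(out, k=rng.integers(0, 4))
    return out

rng = np.random.default_rng(0)
img = np.zeros((64, 64, 3), dtype=np.uint8)      # placeholder for a chocolate photo
batch = np.stack([augment(img, rng) for _ in range(8)])
# shapes and dtypes are preserved, so the batch can feed a CNN directly
```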
How can I use an optimization variable to determine mathematical function?
I have an optimvar, opening, that determines three other parameters through basic arithmetic.
opening = optimvar('opening', 'LowerBound', -30, 'UpperBound', 30);
opening_zero_geo_def = -(e + temp_thickness)/2;
geo_def_des_load = -opening + opening_zero_geo_def;
geo_def_rebound = geo_def_des_load - rebound;
geo_def_jounce = geo_def_des_load + m_2_m_clear;
po_k = (design_load/spring_rate) + geo_def_des_load;
po_kl = po_k/spring_length;
Two parameters, geo_def_jounce and geo_def_rebound, are used to form a domain.
h = .1;
travel = geo_def_rebound:h:geo_def_jounce;
The goal is to use po_kl and one other parameter to choose the function through which the values will be manipulated.
% (pseudo code)
if po_kl is close to .7
    rate = .5*travel.^3
if po_kl is close to .3
    rate = .1*travel.^3 + .4*travel.^2
A polynomial fit is then applied to rate and the root mean squared error of the fit is calculated, and the RMSE is the minimization objective.
My question is how to do this. I've tried using an integer optimvar as an index as suggested by Alan Weiss (https://www.mathworks.com/matlabcentral/answers/374059-how-can-i-set-an-optimization-variable-to-be-an-element-of-a-set-categorical) but this doesn't work as MATLAB gives an error that optimvars cannot be used as indices.
I've also tried just passing everything through with no special handling of the optimvar, but that does not work either, because an optimvar cannot be used in a conditional statement.
I think the problem might be nonlinear, in which case I'll have to use a solver-based approach, but I'm not sure.
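For what it's worth, one standard trick for keeping a problem like this solver-friendly is to replace the hard if/else on po_kl with a smooth blend of the two candidate rate functions, so the objective stays differentiable. A NumPy sketch of the idea (the anchor values 0.7 and 0.3 and the two polynomials come from the pseudocode above; the sigmoid steepness is my own assumption):

```python
import numpy as np

def rate_of(po_kl, travel):
    """Smoothly blend the two candidate rate functions instead of
    branching on po_kl. The weight w goes to 1 as po_kl approaches 0.7
    and to 0 as it approaches 0.3; the steep sigmoid approximates the
    hard switch while remaining differentiable."""
    w = 1.0 / (1.0 + np.exp(-50.0 * (po_kl - 0.5)))
    rate_a = 0.5 * travel ** 3
    rate_b = 0.1 * travel ** 3 + 0.4 * travel ** 2
    return w * rate_a + (1.0 - w) * rate_b

travel = np.arange(-1.0, 1.0, 0.1)
rate = rate_of(0.7, travel)   # effectively the 0.5*travel^3 branch
```

A smooth formula like this may be expressible directly on problem-based optimization expressions in recent MATLAB releases; otherwise a function-based nonlinear solver such as fmincon accepts it as-is.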
GDAL: Can Python GDAL.Translate convert a TIFF file to a JPEG or PNG Array--instead of writing to file?
I have Python code to convert a GeoTIFF image file to a JPEG image file. However, I would like to do this in-memory, meaning that I would like to read the GeoTIFF file, convert it to JPEG, then store the converted data in an in-memory structure, ideally a numpy array. Is there a way to do this?
Here is some sample code I was hoping to adapt:
from osgeo import gdal

scale = '-scale min_val max_val'
options_list = [
    '-ot Byte',
    '-of JPEG',
    scale,
]
options_string = " ".join(options_list)

gdal.Translate('test.jpg', 'test.tif', options=options_string)
This code uses the GDAL Python API, but it requires an output destination referencing a file, like test.jpg in the code above. I don't want to specify an output file; instead I want to output the result to some sort of array.
Note that I did try this conversion without using GDAL too, but all of the pixel intensity ranges were getting compressed to a very small range instead of the range generated from
RESEARCHED ANSWERS THAT DON'T APPLY
Note that I researched some other Stack Exchange posts on this topic, but they all convert the GeoTIFF file to another file. I don't want to write a new file. I am including the link here for convenience, but the key point is that the existing posted questions don't match this one.
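If it helps, GDAL's in-memory /vsimem/ virtual filesystem is the usual way to avoid touching disk: translate to a /vsimem/ path, reopen that path, and read the pixels out with ReadAsArray(). A sketch adapting the code above (function names are mine; the scale bounds remain placeholders just as in the question):

```python
def jpeg_options(min_val=None, max_val=None):
    """Build the gdal.Translate options string; scale bounds are optional."""
    opts = ['-ot Byte', '-of JPEG']
    if min_val is not None and max_val is not None:
        opts.append('-scale %s %s' % (min_val, max_val))
    else:
        opts.append('-scale')
    return ' '.join(opts)

def tiff_to_jpeg_array(src_path):
    """Translate a GeoTIFF to JPEG entirely in memory via GDAL's /vsimem/
    virtual filesystem, then return the decoded pixels as a numpy array."""
    from osgeo import gdal            # requires the GDAL Python bindings
    mem_path = '/vsimem/converted.jpg'
    gdal.Translate(mem_path, src_path, options=jpeg_options())
    ds = gdal.Open(mem_path)
    array = ds.ReadAsArray()          # numpy array, shape (bands, rows, cols)
    ds = None                         # close the dataset
    gdal.Unlink(mem_path)             # free the in-memory file
    return array
```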
Python: inspecting a call graph of an object hierarchy that frequently makes use of composition
OK, so: I have a package, built on PIL/Pillow (hereafter simply “Pillow”), of image processor classes. The basic idea is that an image processor is a class, the instances of which have a process(…) method that takes one Pillow image instance and returns one (possibly mutated, possibly different) Pillow image instance.
All of the image processors in the package are based on ABCs. The root ABC looks like this:
from abc import ABC, abstractmethod

class Processor(ABC):

    @abstractmethod
    def process(self, image):
        """ Process an image instance, per the processor instance,
            returning the processed image data. """
        ...
(N.B., the actual working published ABC in question has a bit more stuff to it, but I am and will be simplifying things for the sake of this Q.)
And many of the simpler image processor implementations look like so:
class Brightness(Processor):

    # __init__() sets some parameters on self

    def process(self, image):
        # … adjust brightness …
        return brightened

class Hue(Processor):

    # __init__() sets some parameters on self

    def process(self, image):
        # … adjust hue …
        return recolorized
… these are instantiated first (processor = Hue(2.7)) and then applied to one or more image instances (recolorized = processor.process(image)).
Not all of the processors are as straightforward, though: one of the most commonly-used processors in the package is the Mode processor, which is based on Enum (from the standard-library enum module; enum34 on Python 2.7):
from enum import Enum, unique, auto

@unique
class Mode(Enum, Processor):
    MONO = auto()
    GRAY = auto()
    RGB = auto()
    CMYK = auto()
    YCbCr = auto()
    # …

    def process(self, image):
        if image.mode == self.to_string():
            return image
        return image.convert(self.to_string())
(N.B. Again, the working version of this one is quite a bit more involved.)
One uses such an enum-based processor by simply accessing one of the enumerated single instances, like so: rgbimage = Mode.RGB.process(image).
Additionally, there are processor-containers (or container-processors, whichever comes off as less awkward) that combine Python container types with processor logic. Two (simplified) examples are the Pipeline processor and the ChannelFork processor:
from collections import defaultdict

class Pipeline(Processor):

    def __init__(self, *processors):
        self.list = [*processors]
        for processor in self.list:
            assert callable(getattr(processor, 'process', None))  # ITS A DUCK

    def process(self, image):
        for processor in self.list:
            image = processor.process(image)
        return image

class ChannelFork(defaultdict):

    # … lots of implementation stuff skipped …

    def __init__(self, processor_factory, *args, **kwargs):
        self.mode = kwargs.pop('mode', Mode.CMYK)
        super(ChannelFork, self).__init__(processor_factory, *args, **kwargs)

    def process(self, image):
        # … possibly pre-process “image” e.g. GCR
        bands = []
        for channel_label, channel in zip(self.mode.bands,
                                          self.mode.process(image).split()):
            # R, G, B; C, M, Y, K; etc.
            bands.append(self[channel_label].process(channel))
        recomposed = Image.merge(self.mode.to_string(), bands)
        # … possibly post-process “recomposed”
        return recomposed
… in a nutshell. These all allow one to build image processing dataflows quickly, using both class hierarchy and composition. Here’s an example implementation that uses all of the above examples:
class Dither(Processor):

    def process(self, image):
        brightened = Brightness(self.brightness).process(image)
        monochrome = Mode.MONO.process(brightened)
        # … do the dithering …
        return dithered

class ColorHalftone(Processor):

    def __init__(self, gcr_percentage=20, output_mode=None, Ditherer=Dither):
        # Using a Pipeline in lieu of instance variables:
        #   self.channelfork = ChannelFork(Ditherer)
        #   self.gcr = BasicGCR(percentage=gcr_percentage)
        #   self.output_mode = output_mode
        self.pipeline = Pipeline(ChannelFork(Ditherer),
                                 BasicGCR(percentage=gcr_percentage),
                                 output_mode or Mode.RGB)

    def process(self, image):
        return self.pipeline.process(image)
Now: despite the abstract vaguery of my examples, this is all highly non-theoretical; here are two images produced just now on my laptop using a processor not unlike the ColorHalftone example above (click on either to see pixel-level detail):
… so yes! Everything thus far is working out great. But, as you may have gathered during my lengthy introductory example-code rollout, these processors can become a complex nesting of sub-processors and sub-sub-processors.
And so I come now to the actual question, per the post title: I am interested in developing a way to inspect the call graph of all of these Processor subclasses, and the interdependent network of process(…) calls that exists between them all.
Personally I have never written anything to generate any kind of call graph. I’m only familiar with the phrase “call graph” from using utilities provided as parts of other software packages – while I’m no newcomer to programming, this is simply one of the many niches I have yet to explore.
So here is the Q: how can I graph all of my “process(…)” calls? Such that I can see when one such call contains others and when they are made serially, and by which caller?
I assume that I will want to hook into the root abstract base class and its implementation (or lack thereof) of “process(…)”, but I am not sure where to go from there:
- What tools already exist to possibly help with this?
- What algorithmic concerns (big-O and otherwise) may come up in performing these kind of introspection operations?
- Which data-representation formats for the call graph should I consider, and which should I avoid?
- What essential utilities may there be in the Python standard library for this purpose?
I know the background and examples were fairly long, but if you’ve read this far, I appreciate whatever practical knowledge you might have in this arena.
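Since every processor funnels through process(…), one low-tech starting point (a sketch under my own naming, and hardly the only approach) is to instrument the root ABC itself: wrap each concrete process() so that calls push and pop a stack and record caller→callee edges at runtime, which can then be fed to a renderer such as graphviz.

```python
from abc import ABC, abstractmethod

_call_stack = []   # processors currently inside a process() call
call_edges = []    # recorded (caller, callee) class-name pairs

class Processor(ABC):

    def __init_subclass__(cls, **kwargs):
        # Wrap each concrete process() defined on the subclass so that
        # nested calls are recorded as edges of the call graph.
        super().__init_subclass__(**kwargs)
        if 'process' in cls.__dict__:
            inner = cls.__dict__['process']
            def traced(self, image, _inner=inner):
                if _call_stack:
                    call_edges.append((type(_call_stack[-1]).__name__,
                                       type(self).__name__))
                _call_stack.append(self)
                try:
                    return _inner(self, image)
                finally:
                    _call_stack.pop()
            cls.process = traced

    @abstractmethod
    def process(self, image):
        ...

# Two toy processors standing in for the real ones:
class Brightness(Processor):
    def process(self, image):
        return image + 1

class Pipeline(Processor):
    def __init__(self, *processors):
        self.list = list(processors)
    def process(self, image):
        for p in self.list:
            image = p.process(image)
        return image

result = Pipeline(Brightness(), Brightness()).process(0)
# call_edges now holds ('Pipeline', 'Brightness') edges
```

The overhead is one list append/pop per call, so the asymptotic cost of a traced run matches the untraced one; the recorded edge list is a natural input for DOT/graphviz or any adjacency-list representation.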
Disparity map for objects very close to cameras in Python
I'm trying to compute a disparity map between stereo images of eye retina, i.e. the back of the eye, where the depths, proportional to the disparities, are in the range of mm.
I tried using OpenCV's StereoBM and StereoSGBM, but the result, even after tuning numDisparities and blockSize, is low quality: noisy and sometimes inconsistent. The problem may be that the range of depths is really small.
Do StereoSGBM and StereoBM have known problems and limitations when objects are extremely close to the cameras and the depth delta is in the range of a few millimetres? Or do I just need to tune the parameters properly and do some appropriate pre-processing?
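To see why a few-millimetre depth range is hard, it helps to look at what block matching does at its core. Below is a minimal sum-of-absolute-differences matcher in NumPy (my own simplification, not OpenCV's implementation): it returns one integer disparity per pixel, so when the true disparities span only a few integer levels, quantization noise dominates the map, and sub-pixel refinement plus careful rectification matter far more than in the usual far-field setting.

```python
import numpy as np

def sad_disparity(left, right, block=5, max_disp=16):
    """Minimal sum-of-absolute-differences block matcher.
    For each left-image pixel, search `max_disp` candidate shifts in the
    right image and keep the one with the lowest patch difference."""
    h, w = left.shape
    half = block // 2
    disp = np.zeros((h, w), dtype=np.int32)
    for y in range(half, h - half):
        for x in range(half + max_disp, w - half):
            patch = left[y - half:y + half + 1, x - half:x + half + 1]
            best, best_d = np.inf, 0
            for d in range(max_disp):
                cand = right[y - half:y + half + 1,
                             x - d - half:x - d + half + 1]
                cost = np.abs(patch.astype(np.int32) - cand).sum()
                if cost < best:
                    best, best_d = cost, d
            disp[y, x] = best_d
    return disp

# Synthetic pair: the right image is the left shifted by 3 pixels,
# so the true disparity is a constant 3 everywhere.
rng = np.random.default_rng(1)
left = rng.integers(0, 255, (40, 60), dtype=np.uint8)
right = np.roll(left, -3, axis=1)
d = sad_disparity(left, right)
```

StereoSGBM and StereoBM do return fixed-point disparities at 1/16-pixel resolution (the output is scaled by 16), so making use of that sub-pixel precision and speckle filtering is usually the first thing to try in this regime.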