How to convert an image with alpha into 8-bit depth in iOS
I want to convert an image to 8-bit when I pick it from the gallery or camera. Here is the code I have tried. The code below works when the image has no alpha channel; if the image has alpha, it does not work. Images picked from the camera carry an alpha value, so the code throws an exception.
CGImageRef c = [[UIImage imageNamed:@"100_3077"] CGImage];
size_t bitsPerPixel = CGImageGetBitsPerPixel(c);
size_t bitsPerComponent = CGImageGetBitsPerComponent(c);
size_t width = CGImageGetWidth(c);
size_t height = CGImageGetHeight(c);
CGImageAlphaInfo a = CGImageGetAlphaInfo(c);
NSAssert(bitsPerPixel == 32 && bitsPerComponent == 8 && a == kCGImageAlphaNoneSkipLast,
         @"unsupported image type supplied");

// 8-bit grayscale destination context: 1 byte per pixel, so bytesPerRow == width.
CGColorSpaceRef graySpace = CGColorSpaceCreateDeviceGray();
CGContextRef targetImage = CGBitmapContextCreate(NULL, width, height, 8, width,
                                                 graySpace, kCGImageAlphaNone);
CGColorSpaceRelease(graySpace);

// Keep a strong reference to the pixel data so the bytes stay valid for the loop.
NSData *pixelData = (__bridge_transfer NSData *)CGDataProviderCopyData(CGImageGetDataProvider(c));
const UInt8 *sourceData = (const UInt8 *)pixelData.bytes;
UInt8 *targetData = CGBitmapContextGetData(targetImage);

for (size_t y = 0; y < height; y++)
{
    for (size_t x = 0; x < width; x++)
    {
        size_t offset = y * width + x;
        // Source pixels are 4 bytes each (RGBX); address the components as bytes,
        // not as UInt32 values, so r/g/b come from the same pixel.
        const UInt8 *pixel = &sourceData[offset * 4];
        UInt8 r = pixel[0];
        UInt8 g = pixel[1];
        UInt8 b = pixel[2];
        targetData[offset] = (r + g + b) / 3;
    }
}

CGImageRef newImageRef = CGBitmapContextCreateImage(targetImage);
UIImage *newImage = [UIImage imageWithCGImage:newImageRef];
CGImageRelease(newImageRef);
CGContextRelease(targetImage);
I have tried this code, but it only converts an image to 8-bit when the image's CGImageAlphaInfo is kCGImageAlphaNoneSkipLast. How can I handle images that carry alpha?
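One way to side-step the per-format byte handling entirely is to let Core Graphics do the conversion: draw the source image into an 8-bit grayscale bitmap context, which composites any alpha away during the draw. Below is a minimal Swift sketch of that idea (the function name grayscale8Bit is illustrative, not from the original code):

import UIKit

func grayscale8Bit(from image: UIImage) -> UIImage? {
    guard let cgImage = image.cgImage else { return nil }
    let width = cgImage.width
    let height = cgImage.height
    // 8-bit grayscale destination; bytesPerRow 0 lets Core Graphics pick the stride.
    guard let context = CGContext(data: nil,
                                  width: width,
                                  height: height,
                                  bitsPerComponent: 8,
                                  bytesPerRow: 0,
                                  space: CGColorSpaceCreateDeviceGray(),
                                  bitmapInfo: CGImageAlphaInfo.none.rawValue) else {
        return nil
    }
    // Drawing performs the RGB(A)-to-gray conversion regardless of the source
    // image's alpha info, so no assertion on the pixel format is needed.
    context.draw(cgImage, in: CGRect(x: 0, y: 0, width: width, height: height))
    return context.makeImage().map { UIImage(cgImage: $0) }
}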
See also questions close to this topic
- Breakpoint not getting hit, nor is the Message extension getting attached, in iOS
I am writing an SMS filter extension for iOS. I created a MessageFilterExtension. The extension code is:

import IdentityLookup

final class MessageFilterExtension: ILMessageFilterExtension {
    var words: [String] = ["deal", "spam", "offer"]

    override init() {
        print("message filter init")
    }
}

extension MessageFilterExtension: ILMessageFilterQueryHandling {
    func handle(_ queryRequest: ILMessageFilterQueryRequest,
                context: ILMessageFilterExtensionContext,
                completion: @escaping (ILMessageFilterQueryResponse) -> Void) {
        //...
    }
}
I set breakpoints in the container app's launch code, in the appex handle method, and in other methods. The appex does not call remote services. I chose the appex target, clicked Run, and in the attach list chose the container app, which was already suggested as the likely target; it then shows "Waiting to attach". I then chose the container app target and hit Run: it builds, runs, and hits the breakpoint, but the appex never gets attached or launched. I then sent some SMS texts, but they are not filtered, and no print statements appear in Xcode or Console.app. I have enabled the message filter and set the container app to filter SMS. I am testing on an iPhone 7 Plus with iOS 12.1.4; I also tested on a previous iOS release, but it still did not work.
Methods I tried so far:
- How To Debug iOS Appex
- Messages App Extension won't hit breakpoints
I do not have the contact name saved. (With some combination it worked once, but now I am not able to debug.) How do I debug message filter extensions?
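For reference, here is a minimal sketch of what the elided handle body might look like for purely local filtering against the words list (this is an assumption about the intent, not the asker's actual code):

func handle(_ queryRequest: ILMessageFilterQueryRequest,
            context: ILMessageFilterExtensionContext,
            completion: @escaping (ILMessageFilterQueryResponse) -> Void) {
    let response = ILMessageFilterQueryResponse()
    let body = (queryRequest.messageBody ?? "").lowercased()
    // Filter the message if it contains any watched word; otherwise allow it.
    response.action = words.contains { body.contains($0) } ? .filter : .allow
    completion(response)
}

If this runs but nothing is filtered, the problem is in how the appex gets launched, which is what the debugging question above is about.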
- There is no payment product_id in the array of purchase receipts in the app
I'm using Unity IAP and am currently testing Apple in-app purchase restore. The product_id and receipt are initially present in the app's array of purchase receipts, but over time the payment receipt disappears and only a few unrelated receipts are displayed.
{ "receipt":{ "receipt_type":"ProductionSandbox", "adam_id":0, "app_item_id":0, "bundle_id":"", "application_version":"0", "download_id":0, "version_external_identifier":0, "receipt_creation_date":"2019-02-14 12:53:47 Etc/GMT", "receipt_creation_date_ms":"1550148827000", "receipt_creation_date_pst":"2019-02-14 04:53:47 America/Los_Angeles", "request_date":"2019-02-18 14:20:29 Etc/GMT", "request_date_ms":"1550499629809", "request_date_pst":"2019-02-18 06:20:29 America/Los_Angeles", "original_purchase_date":"2013-08-01 07:00:00 Etc/GMT", "original_purchase_date_ms":"1375340400000", "original_purchase_date_pst":"2013-08-01 00:00:00 America/Los_Angeles", "original_application_version":"1.0", "in_app":[ { "quantity":"1", "product_id":"puzzle18", "transaction_id":"", "original_transaction_id":"", "purchase_date":"2019-02-14 12:08:20 Etc/GMT", "purchase_date_ms":"1550146100000", "purchase_date_pst":"2019-02-14 04:08:20 America/Los_Angeles", "original_purchase_date":"2019-02-14 12:08:20 Etc/GMT", "original_purchase_date_ms":"1550146100000", "original_purchase_date_pst":"2019-02-14 04:08:20 America/Los_Angeles", "is_trial_period":"false" } ] }, "status":0, "environment":"Sandbox" }
The product I restored is fuzzle11, but it does not exist in the receipt.
- Old message is not removed in iPhone notification tray
I am using the push plugin in an Ionic 3 app and everything is working fine. Our server sends one message at a time, and I receive it on the iPhone, but the old message should be overwritten by the new one, or cleared automatically once the new message arrives. I'm not able to find anything related to this; please can anybody help me solve it.
const options: PushOptions = {
  android: {},
  ios: {
    alert: 'true',
    badge: true,
    sound: 'false'
  },
  windows: {},
  browser: {
    pushServiceURL: 'http://push.api.phonegap.com/v1/push'
  }
};

const pushObject: PushObject = this.push.init(options);

pushObject.on('notification').subscribe((notification: any) =>
  console.log('Received a notification', notification));
- Change the flex spacing between divs with background images
Hi, I am remaking the Google Chrome home page, but I can't seem to do the part at the bottom of the page where the most-used apps are. I am trying to do it with display: flex because it puts them inline, but I can't get the spacing right: it's uneven, or too big, or too small. Here is what I want it to look like...
and here is what i get
The top one is what I need to get. Notice that the spacing is too small, and in the middle it's too big; I am looking for it to be just right. justify-content doesn't seem to affect the spacing? Here is the CSS:
.youtube { background-image: url(youtube.png); }
.facebook { background-image: url(facebook.png); }
.roblox { background-image: url(roblox.png); }
.Agar { background-image: url(Agar.png); }
.gmail { background-image: url(gmail.png); }

.rowCell {
  justify-content: space-around;
  align-items: center;
  position: relative;
  background-repeat: no-repeat;
  width: 200px;
  height: 150px;
  margin-top: -589px;
  left: 422px;
}

.mostUsedApps {
  width: 42%;
  display: flex;
  justify-content: space-between;
  justify-content: space-around;
  align-items: center;
}
Here is the HTML:
<div class = 'mostUsedApps'> <div class = 'youtube rowCell'></div> <div class = 'facebook rowCell' ></div> <div class = 'roblox rowCell'></div> <div class = 'Agar rowCell'></div> <div class = 'gmail rowCell'></div> </div>
Any help is much appreciated, thanks :)
- How to efficiently perform mean image subtraction for deep learning
To preprocess images before deep learning, I have doubts about how to perform mean image subtraction: 1) Does the mean subtraction have to be performed separately for each image, or should the mean be computed per channel over all N images and then subtracted? 2) With a uint8 dataset, is it good to perform the mean subtraction directly, or is it better to convert to float64 first? (I performed the mean subtraction with uint8 and visually the illumination was reduced, so I thought fewer features might be learned from that.)
I would like to share the code I implemented for image mean subtraction:

for n = 1:m
    temp = a(:,:,n);
    mean_a = mean(temp(:));
    % subtract the mean first, then divide by the standard deviation
    new_a(:,:,n) = (temp - mean_a) / std_a;
end
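For clarity, the two variants in question 1 can be written out as formulas (an illustration of the options, not a recommendation). Per-image subtraction uses each image's own statistics:

    new_a = (a - mean(a(:))) / std(a(:))

Per-channel dataset subtraction instead computes one mean per channel c over all N training images and subtracts that same value from every image:

    mu_c = mean of channel c over all N images
    new_x(:,:,c) = x(:,:,c) - mu_c

The second variant is what mean image/mean pixel subtraction in frameworks such as Caffe does.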
- How to get images from Bing Web Search API v7 in my Flutter application
How can I use the Bing Web Search API v7 to get photos in my Flutter application? I got the API and endpoints from its official site, but I was not able to get the JSON to call in the itemBuilder child, e.g. child: new Image.network('${data['value']['webSearchUrl']}'). I don't know what to put in this child or where to put the API key.
class _PageOneState extends State<PageOne> {
  @override
  Widget build(BuildContext context) {
    return Scaffold(
      body: new FutureBuilder(
        future: getPics(),
        builder: (context, snapShot) {
          Map data = snapShot.data;
          if (snapShot.hasError) {
            print(snapShot.error);
            return Text('Failed to get data from server',
                style: TextStyle(color: Colors.red, fontSize: 22.0));
          } else if (snapShot.hasData) {
            return new Center(
              child: new ListView.builder(
                itemCount: data.length,
                itemBuilder: (context, index) {
                  return new Column(
                    children: <Widget>[
                      new Container(
                        child: new InkWell(
                          onTap: () {},
                          child: new Image.network(
                              '${data['value']['webSearchUrl']}'),
                        ),
                      ),
                      new Padding(padding: const EdgeInsets.all(5.0)),
                    ],
                  );
                },
              ),
            );
          } else {
            return new Center(child: CircularProgressIndicator());
          }
        },
      ),
    );
  }
}

And below, the getPics code:

Future<Map> getPics() async {
  String url = 'https://api.cognitive.microsoft.com/bing/v7.0/images';
  http.Response response = await http.get(url);
  return json.decode(response.body);
}
- How to plot epoch vs val_acc and epoch vs val_loss graphs for a CNN?
I have used a CNN for training a dataset. I get epoch, val_loss, val_acc, total loss, training time, etc. in the history. If I want to calculate the average accuracy, how do I access val_acc? And how do I plot epoch vs val_acc and epoch vs val_loss graphs?
convnet = input_data(shape=[None, IMG_SIZE, IMG_SIZE, 3], name='input')
convnet = conv_2d(convnet, 32, 3, activation='relu')
convnet = max_pool_2d(convnet, 3)
convnet = conv_2d(convnet, 64, 3, activation='relu')
convnet = max_pool_2d(convnet, 3)
convnet = conv_2d(convnet, 128, 3, activation='relu')
convnet = max_pool_2d(convnet, 3)
convnet = conv_2d(convnet, 32, 3, activation='relu')
convnet = max_pool_2d(convnet, 3)
convnet = conv_2d(convnet, 64, 3, activation='relu')
convnet = max_pool_2d(convnet, 3)
convnet = fully_connected(convnet, 1024, activation='relu')
convnet = dropout(convnet, 0.8)
convnet = fully_connected(convnet, 4, activation='softmax')
convnet = regression(convnet, optimizer='adam', learning_rate=LR,
                     loss='categorical_crossentropy', name='targets')

model = tflearn.DNN(convnet, tensorboard_dir='log')

if os.path.exists('{}.meta'.format(MODEL_NAME)):
    model.load(MODEL_NAME)
    print('model loaded!')

train = train_data[:-150]
test = train_data[-50:]

X = np.array([i[0] for i in train]).reshape(-1, IMG_SIZE, IMG_SIZE, 3)
Y = [i[1] for i in train]
test_x = np.array([i[0] for i in test]).reshape(-1, IMG_SIZE, IMG_SIZE, 3)
test_y = [i[1] for i in test]

hist = model.fit({'input': X}, {'targets': Y}, n_epoch=8,
                 validation_set=({'input': test_x}, {'targets': test_y}),
                 snapshot_step=40, show_metric=True, run_id=MODEL_NAME)
model.save(MODEL_NAME)
- I need an annotation tool to label many images to create a database
I need a tool that can read a list of images one by one. In each one I want to be able to highlight the damage and then save the processed image to a file. At the same time I want to be able to label the image (damage position) and store both the name of the image and its label in a CSV file. The CSV file should contain, at the end, all the names of the processed images and their labels.
- How to solve an error about 'sys/times.h' in MATLAB?
I am trying to compile some code, but an error occurs.
Error using mex
correspondPixels.cc
G:\matlab\agreement-master\agreement-master\functions\iprecision\support_functions\source\csa.hh(18):
fatal error C1083: Cannot open include file: 'sys/times.h': No such file or directory

Error in build (line 7)
mex -largeArrayDims -v CXXFLAGS="\$CXXFLAGS -O3 -DNOBLAS" -outdir ../ correspondPixels.cc csa.cc kofn.cc match.cc Exception.cc Matrix.cc Random.cc String.cc Timer.cc
The code is linked here: GitHub; the function build.m should be run. I've also seen this link, but I couldn't apply its suggestion to the code on Windows.
How do I get sys/times.h on Windows 10?
- Converting Windows.UI.Xaml.Media.Imaging.BitmapImage to Xamarin.Forms.Image
I want to load a PDF as explained in https://blog.pieeatingninjas.be/2016/02/06/displaying-pdf-files-in-a-uwp-app/.
PDF loading and conversion work, but now I need the loaded bitmap back in my shared main project, where I cannot use the UWP BitmapImage.

StorageFile file = await StorageFile.GetFileFromApplicationUriAsync(new Uri(filePath));
PdfDocument pdfDoc = await PdfDocument.LoadFromFileAsync(file);
BitmapImage firstPDF = new BitmapImage();
PdfPage page = pdfDoc.GetPage(0);

using (var stream = new InMemoryRandomAccessStream())
{
    await page.RenderToStreamAsync(stream);
    await firstPDF.SetSourceAsync(stream);
}
firstPDF is now of type Windows.UI.Xaml.Media.Imaging.BitmapImage, but I need Xamarin.Forms.Image.
- Is there a fast way to convert an image into WebP?
On my website I now convert uploaded images into WebP, because it is smaller than the other formats, so users will load my pages faster (mobile users too). But it takes some time to convert a medium-sized image.

import StringIO
import time

from PIL import Image as PilImage

img = PilImage.open('222.jpg')
originalThumbStr = StringIO.StringIO()

now = time.time()
img.convert('RGBA').save(originalThumbStr, 'webp', quality=75)
print(time.time() - now)
It takes 2.8 seconds to convert the following image:
860 kB, 1920 × 1080
I have 8 GB of RAM and a 4-core processor (Intel i5), with no GPU.
I'm using Pillow==5.4.1. Is there a faster way to convert an image into WebP? 2.8 s seems too long to wait.
- How to change the dimensions of an uploaded image and convert the changed image into a base64 URL
For now I have managed to change the image width, but I was not able to change the base64 value so that it gives me the changed image. Below is my code:
this.fileTobeUpload = file.item(0);

var reader = new FileReader();
reader.onloadend = (event: any) => {
  var image = new Image();
  image.src = event.target.result;
  image.onload = () => {
    image.width = 200;
  };
  this.previewImage = event.target.result;
  this.newsItem.image = this.previewImage;
};
reader.readAsDataURL(this.fileTobeUpload);
- Difference between sampling rate, bit rate and bit depth
This is kind of a basic question that might sound too obvious to many of you, but I am getting confused so badly.
Here is what a Quora user says. Now it is clear to me what a sampling rate is: the number of samples you take of a sound signal (in one second) is its sampling rate.
My doubt here is: this rate should have nothing to do with the quantisation, right?
About bit depth: is the quantisation dependent on bit depth, as in 32-bit (2^32 levels) and 64-bit (2^64 levels)? Or is it something else?
And the bit rate: is it the number of bits transferred in one second? If an audio file says 320 kbps, what does that really mean?
I assume readers now have some sense of how I am panicking over where bit rate and bit depth have significance.
EDIT: Also see this question if you have worked with Linux and the GStreamer framework.
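Since the question is about how the three numbers relate, a worked example may help (this is standard uncompressed-PCM arithmetic, not taken from the question): for raw PCM audio,

    bit rate = sampling rate × bit depth × number of channels

so CD-quality stereo is 44,100 samples/s × 16 bits/sample × 2 channels = 1,411,200 bits/s ≈ 1,411 kbps. The bit depth fixes the number of quantisation levels (16-bit gives 2^16 = 65,536 levels), while the sampling rate is independent of quantisation. A "320 kbps" file is compressed (e.g. MP3), so its bit rate is the encoder's output rate and the formula above no longer applies directly.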
- I want to change the bit depth of an image in R
I have been trying to get AutoCAD to print a georeferenced photo in any format. Every time I get a photo from a private company it will not plot; however, GeoTIFFs of the same format from my local government plot just fine. After testing many different things it could be, my newest idea is bit depth: the bit depth of the government's photos is 32, while the private ones are 24. I was wondering whether there is a way in R (since it is the only language I know) to convert any photo type from 24- to 32-bit depth, either through math code or a function in a package. Thank you.
- How to reduce image bit depth (quantization) with Swift?
Is there a way on iPhone to reduce an image's bit depth? I'm trying to reduce the image size to less than 1 MB, but I don't want to change the image's width and height, so reducing the bit depth might be the correct way to do that. Does anyone know how to do that using Swift?
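A sketch of one possible approach, under the assumption that the real goal is the file-size limit rather than a specific pixel format: keep the dimensions and re-encode at decreasing JPEG quality until the data fits under 1 MB (the function name compressedData is illustrative). True bit-depth reduction is also possible by redrawing into a lower-bit-depth CGContext, as in the grayscale sketch under the first question above.

import UIKit

// Illustrative sketch: simple stepped quality reduction until the size fits.
func compressedData(for image: UIImage, maxBytes: Int = 1_000_000) -> Data? {
    var quality: CGFloat = 0.9
    while quality > 0.1 {
        // jpegData(compressionQuality:) re-encodes without changing width/height.
        if let data = image.jpegData(compressionQuality: quality),
           data.count <= maxBytes {
            return data
        }
        quality -= 0.1
    }
    return image.jpegData(compressionQuality: 0.1)
}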