Unable to create secret API key for Google Maps
I am working with the pix2pix neural network and need to generate some data from Google Maps. The script requires an API Key and a Secret API Key.
I tried to enable restrictions on my API key (to my IP); however, I am definitely doing something wrong. I can't find any new key created.
If any more info is needed I will try to elaborate.
1 answer
-
answered 2022-01-19 17:56
DazWilkin
The (API) key is shown in both screenshots and begins with AIza. You should delete the API key that you included in the screenshots and create another.
Please be very careful about sharing data like this: an API key is a so-called bearer token, which means that anyone (the bearer) who has it can use it (although you are trying to enforce API and IP restrictions, which is good practice).
It's possible someone could infer the entire key from the two screenshots that you've included. Deleting it will invalidate it. Please create another and don't share it with anyone who doesn't require it.
I'm unfamiliar with pix2pix, but all API keys should be treated as secrets. Google only provides one value for an API key, so there's no "API Key" and "Secret API Key", only an "API Key", which should always be treated as a secret.
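As an aside, one minimal way to keep a key out of source code and screenshots is to read it from an environment variable at run time. A short Python sketch (the variable name MAPS_API_KEY is just an example, not anything Google-specific):

import os

# Read the key from the environment instead of hardcoding it in the script;
# this raises KeyError if the variable is not set.
api_key = os.environ["MAPS_API_KEY"]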
See also questions close to this topic
-
Images with Firebase and Python Flask API
I am currently developing an API using Firebase from Google and Python's Flask libraries. It is a project where I need to save images to the DB and then address them in the API. I would also like to know how to relate an image to an item in the database, say, posting an image of Juan and having it linked to ALL the information about Juan inside the DB. Thanks!
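A minimal sketch of one common pattern, assuming the firebase_admin SDK (the service-account file, bucket name, file paths, and collection/document names below are placeholders): upload the image to Cloud Storage, then store its URL on the person's Firestore document so the image is linked to all of Juan's data.

import firebase_admin
from firebase_admin import credentials, firestore, storage

# Placeholder credentials and bucket name -- substitute your own.
cred = credentials.Certificate("serviceAccount.json")
firebase_admin.initialize_app(cred, {"storageBucket": "your-project.appspot.com"})

# 1) Upload the image file to Cloud Storage.
bucket = storage.bucket()
blob = bucket.blob("people/juan.jpg")
blob.upload_from_filename("juan.jpg")
blob.make_public()  # or generate a signed URL if the image should stay private

# 2) Store the image URL on Juan's document so it is linked to his data.
db = firestore.client()
db.collection("people").document("juan").set(
    {"photo_url": blob.public_url}, merge=True)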
-
Need help finding an API or real-world data related to people's reading time
I need to create a project for which I require real-world data on reading time per page or article. Can anyone help me find data or an API from which I can get this? I tried to search for some APIs, but they had information related to books rather than a person's reading time.
-
Using Websocket Channel in React
I am working on a project where I have to make an API call to a websocket and display certain information based on that call. As you can see, I have used the subscribe portion of the channel in my call. Now I'm not sure what to do, if anything, with the second two sets of brackets. Do I need to include them in my call as well, or do they come along with the subscription? And from which of them would I be using the information I need to display? In other words, which one is the information I am receiving?
'''
const ws = new WebSocket("wss://ws-feed.exchange.coinbase.com");

const apiCall = {
  type: "subscribe",
  product_ids: ["ETH-USD", "BTC-USD"],
  channels: ["level2"]
};

ws.onopen = (event) => {
  ws.send(JSON.stringify(apiCall));
};

ws.onmessage = function (event) {
  const json = JSON.parse(event.data);
  console.log(`[message] Data received from server: ${json}`);
};
'''
-
how to determine if a list of zip codes is no more than x miles away from a given city using the Google Maps API?
I need to determine which of a list of zip codes (e.g. [22400, 92037, 92107, 33101, 10001]) are no more than x miles away (e.g. 100 miles) from a given city (e.g. Los Angeles).
So the result would be only [22400, 92037, 92107].
Is this possible using the Google Maps API? If not, is there another similar API that can do this?
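A minimal Python sketch of one way to do this, assuming the googlemaps client library and a valid API key of your own (the helper names and the key placeholder are illustrative): geocode the city and each zip code, then filter by great-circle (haversine) distance.

import math
import googlemaps

gmaps = googlemaps.Client(key="YOUR_API_KEY")  # assumption: your own key

def latlng(query):
    """Geocode a free-form query and return (lat, lng)."""
    loc = gmaps.geocode(query)[0]["geometry"]["location"]
    return loc["lat"], loc["lng"]

def haversine_miles(a, b):
    """Great-circle distance between two (lat, lng) pairs, in miles."""
    lat1, lng1, lat2, lng2 = map(math.radians, (*a, *b))
    h = (math.sin((lat2 - lat1) / 2) ** 2
         + math.cos(lat1) * math.cos(lat2) * math.sin((lng2 - lng1) / 2) ** 2)
    return 3958.8 * 2 * math.asin(math.sqrt(h))  # Earth radius ~3958.8 mi

city = latlng("Los Angeles, CA")
zips = ["22400", "92037", "92107", "33101", "10001"]
# In practice you may want queries like "zip 92037, USA" to avoid ambiguity.
nearby = [z for z in zips if haversine_miles(city, latlng(z)) <= 100]
print(nearby)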
-
county boundaries for US map with large data
I am trying to load all the county boundaries on the initial load. I researched and found that loadGeoJson supports that. When I test with state boundaries it works fine, but when I test with county boundaries only a few boundaries load. Can you tell me how to fix it? Providing my code snippet and sandbox below. I have updated the county boundaries in the gist https://gist.githubusercontent.com/js08/c6f73019ad29ad1c0324658c2ea36bb3/raw/a49f5989f5f6fdcde9dd686e7a72f0b9921780cb/USCounties.json
https://codesandbox.io/s/sharp-meninsky-1y5jk7?file=/index.ts
map.data.loadGeoJson(
  // "https://storage.googleapis.com/mapsdevsite/json/states.js",
  "https://gist.githubusercontent.com/js08/c6f73019ad29ad1c0324658c2ea36bb3/raw/a49f5989f5f6fdcde9dd686e7a72f0b9921780cb/USCounties.json",
  { idPropertyName: "STATE" }
);
-
Flutter - 3 questions about Google Maps Implementation
I am implementing Google Maps in my Flutter app. It's OK, but I need to customize some features; the easiest way to explain is with the screenshots below.
In Screenshot I, that is the screen shown after the map is shared through a text message and the receiver clicks the link. There is an upper-case "A" with a vertical line right next to the letter, but in the Google Maps app there is a photo instead of the "A".
Question 1: How can I put a photo related to the address?
If you see the marker, it has a white outline.
Question 2: How can I remove the white outline and resize it?
In Screenshot II, that screen shows a text message sent by my app. You see the coordinates, but there should be the name of a location or an address.
Question 3: How can I put the name of a location or address?
I tried to figure these out but no luck so far. I may just need some hints or code!
Thank you in advance for your help!!
Screenshot I
https://drive.google.com/file/d/1RTiXm9GHRmWPs2gs8RPU8GFPGmEJgg8n/view?usp=sharing
Screenshot II
https://drive.google.com/file/d/1sMt0sOlyMhjN5huTnxDzJx41bFJttcWW/view?usp=sharing
-
Google Cloud Compute Engine http Connection Timeout
I have set up a Compute Engine VM with 2 vCPUs and 2 GB RAM. I have set up an nginx server and the firewall permissions as shown in the diagram. When I try to access the Angular files hosted on the server using the external IP, I get the error "The connection has timed out", and when I try curl from the terminal, it displays the error "curl: (28) Failed to connect to IP port 80 after 129163 ms: Connection timed out".
Both the Http and Https firewall rules are enabled
When I run the commands
sudo systemctl status apache2
netstat -tulpn | grep LISTEN
Any ideas on what the issue might be would be really helpful.
-
(Terraform) Error 400: Invalid request: instance name (pg_instance)., invalid
On GCP, I'm trying to create a Cloud SQL instance with this Terraform code below:
resource "google_sql_database_instance" "postgres" { name = "pg_instance" database_version = "POSTGRES_13" region = "asia-northeast1" deletion_protection = false settings { tier = "db-f1-micro" disk_size = 10 } } resource "google_sql_user" "users" { name = "postgres" instance = google_sql_database_instance.postgres.name password = "admin" }
But I got this error:
Error: Error, failed to create instance pg_instance: googleapi: Error 400: Invalid request: instance name (pg_instance)., invalid
Are there any mistakes in my Terraform code?
-
Apache beam FixedWindow doesn't do anything after GroupByKey transform
I built a pipeline that reads from Confluent Kafka, processes the records, and then uses side outputs to split them into rejected and approved PCollections. The approved PCollection gets written to BigQuery, but I also want to persist the approved records and write them to a file on GCS.
The code is:
windowing = (
    approved
    | 'Create_window' >> beam.WindowInto(window.FixedWindows(60))
    | 'AddKey' >> beam.Map(lambda record: (None, record))
    | 'GBK' >> beam.GroupByKey()
    | 'remove_key' >> beam.FlatMap(ret_key)
    | 'AddTimeStamp' >> beam.Map(lambda record: beam.window.TimestampedValue(record, time.time()))
    | 'Write' >> WriteToFiles(path=MY_BUCKET, file_naming=destination_prefix_naming('.ppl'))
)
This works when I test it reading from a file with the direct runner, but when I use Dataflow with streaming it just doesn't do anything after the GroupByKey transform: the graph says 20 elements were added, but the next transform ('remove_key') never receives an element after that.
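A hedged sketch of one likely cause: in streaming mode a GroupByKey only emits when the window can close according to the event-time watermark, so with the default trigger nothing is released if the watermark never advances past the window end (note also that in the pipeline above the timestamps are attached after the GroupByKey, so they cannot influence windowing). One thing to try, assuming the Beam Python trigger API, is an explicit processing-time trigger:

import apache_beam as beam
from apache_beam import window
from apache_beam.transforms.trigger import (
    AccumulationMode, AfterProcessingTime, Repeatedly)

windowed = (
    approved
    | 'Create_window' >> beam.WindowInto(
        window.FixedWindows(60),
        # Fire repeatedly on processing time instead of waiting for the
        # event-time watermark to pass the end of the window.
        trigger=Repeatedly(AfterProcessingTime(60)),
        accumulation_mode=AccumulationMode.DISCARDING)
)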
-
Save GAN generated images
I'm new to learning Python. I saw some code on the Internet that saves generated GAN images, but I need these generated images to be saved to a folder in Google Colaboratory (Colab). How do I do this?
def generate_and_save_images(model, epoch, test_input):
    predictions = model(test_input, training=False)
    fig = plt.figure(figsize=(4, 4))
    for i in range(predictions.shape[0]):
        plt.subplot(4, 4, i + 1)
        plt.imshow(predictions[i, :, :, 0] * 127.5 + 127.5, cmap='gray')
        plt.imsave('image_at_epoch_{:04d}-{}.png'.format(epoch, i),
                   predictions[i, :, :, 0] * 127.5 + 127.5, cmap='gray')
        plt.axis('off')
    plt.savefig('image_at_epoch_{:04d}.png'.format(epoch))
    plt.show()
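If the goal is simply to have the files land in a folder you keep, a minimal sketch is to mount Google Drive in the Colab runtime and save into a directory there (the folder name gan_images is an arbitrary example):

import os
from google.colab import drive

drive.mount("/content/drive")  # prompts for authorization once per session
out_dir = "/content/drive/MyDrive/gan_images"
os.makedirs(out_dir, exist_ok=True)

# Then pass paths inside out_dir to plt.imsave / plt.savefig, e.g.:
# plt.savefig(os.path.join(out_dir, "image_at_epoch_{:04d}.png".format(epoch)))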
-
Error in layer of Discriminator Model while making a GAN model
I made a GAN model for generating images based on sample training images of anime. On executing the code I got this error:
ValueError: Input 0 of layer "discriminator" is incompatible with the layer: expected shape=(None, 64, 64, 3), found shape=(64, 64, 3)
Even changing the shape of the first layer of the discriminator to (None, 64, 64, 3) did not help.
Code:
Preprocessing:
import numpy as np
import tensorflow as tf
from tqdm import tqdm
from tensorflow import keras
from tensorflow.keras import layers

img_h, img_w, img_c = 64, 64, 3
batch_size = 128
latent_dim = 128
num_epochs = 100
dir = '/home/samar/Desktop/project2/anime-gan/data'

dataset = tf.keras.utils.image_dataset_from_directory(
    directory=dir,
    seed=123,
    image_size=(img_h, img_w),
    batch_size=batch_size,
    shuffle=True)

xtrain, ytrain = next(iter(dataset))
xtrain = np.array(xtrain)
xtrain = np.apply_along_axis(lambda x: x / 255.0, 0, xtrain)
Discriminator model:
discriminator = keras.Sequential(
    [
        keras.Input(shape=(64, 64, 3)),
        layers.Conv2D(64, kernel_size=4, strides=2, padding="same"),
        layers.LeakyReLU(alpha=0.2),
        layers.Conv2D(128, kernel_size=4, strides=2, padding="same"),
        layers.LeakyReLU(alpha=0.2),
        layers.Conv2D(128, kernel_size=4, strides=2, padding="same"),
        layers.LeakyReLU(alpha=0.2),
        layers.Flatten(),
        layers.Dropout(0.2),
        layers.Dense(1, activation="sigmoid"),
    ],
    name="discriminator",
)
discriminator.summary()
Generator Model:
generator = keras.Sequential(
    [
        keras.Input(shape=(latent_dim,)),
        layers.Dense(8 * 8 * 128),
        layers.Reshape((8, 8, 128)),
        layers.Conv2DTranspose(128, kernel_size=4, strides=2, padding="same"),
        layers.LeakyReLU(alpha=0.2),
        layers.Conv2DTranspose(256, kernel_size=4, strides=2, padding="same"),
        layers.LeakyReLU(alpha=0.2),
        layers.Conv2DTranspose(512, kernel_size=4, strides=2, padding="same"),
        layers.LeakyReLU(alpha=0.2),
        layers.Conv2D(3, kernel_size=5, padding="same", activation="sigmoid"),
    ],
    name="generator",
)
generator.summary()
Training:
opt_gen = keras.optimizers.Adam(1e-4)
opt_disc = keras.optimizers.Adam(1e-4)
loss_fn = keras.losses.BinaryCrossentropy()

for epoch in range(10):
    for idx, real in enumerate(tqdm(xtrain)):
        batch_size = real.shape[0]
        random_latent_vectors = tf.random.normal(shape=(batch_size, latent_dim))
        with tf.GradientTape() as gen_tape:
            fake = generator(random_latent_vectors)
        if idx % 100 == 0:
            img = keras.preprocessing.image.array_to_img(fake[0])
            img.save("/home/samar/Desktop/project2/anime-gan/gen_images/generated_img_%03d_%d.png" % (epoch, idx))
        with tf.GradientTape() as disc_tape:
            loss_disc_real = loss_fn(tf.ones((batch_size, 1)), discriminator(real))
            loss_disc_fake = loss_fn(tf.zeros((batch_size, 1)), discriminator(fake))
            loss_disc = (loss_disc_real + loss_disc_fake) / 2
        gradients_of_discriminator = disc_tape.gradient(loss_disc, discriminator.trainable_variables)
        opt_disc.apply_gradients(zip(gradients_of_discriminator, discriminator.trainable_variables))
        with tf.GradientTape() as gen_tape:
            fake = generator(random_latent_vectors)
            output = discriminator(fake)
            loss_gen = loss_fn(tf.ones(batch_size, 1), output)
        grads = gen_tape.gradient(loss_gen, generator.trainable_weights)
        opt_gen.apply_gradients(zip(grads, generator.trainable_weights))
Also, can you please explain the difference between the shapes (None, 64, 64, 3) and (64, 64, 3)?
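A short sketch of the distinction: (None, 64, 64, 3) describes a batch of 64x64 RGB images where the batch size (None) is left unspecified until run time, while (64, 64, 3) is a single image with no batch axis. Keras models always expect the batch axis, so a lone image needs one added before being fed to the discriminator (the names below are illustrative):

import tensorflow as tf

single_image = tf.zeros((64, 64, 3))             # one image, no batch axis
batched = tf.expand_dims(single_image, axis=0)   # shape (1, 64, 64, 3)
# `batched` now matches the discriminator's expected (None, 64, 64, 3)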
-
Unprogressive GAN loss function
I am trying to train a Deep Convolutional GAN (DCGAN) in PyTorch with 3D density data of shape (128, 128, 128) that lies in the range (-1, 1700). I standardized and then normalized the data so that it now lies in the range (-1, 1). I am using learning rates of 1e-5 for both the discriminator and the generator. The latent vector is drawn from a standard normal distribution and the loss function is BCE, but the loss seems to saturate and stay the same after some initial iterations:
The Discriminator and Generator networks have the following architecture:
class GeneratorNet(nn.Module):
    def brick(self, inchn, outchn, k, s, p, bias=False):
        return nn.ConvTranspose3d(inchn, outchn, k, s, p, bias=False)

    def postop(self, chn):
        return nn.Sequential(
            nn.BatchNorm3d(chn),
            nn.ReLU(True),
        )

    def __init__(self, ngpu):
        super(GeneratorNet, self).__init__()
        self.ngpu = ngpu
        # --input or z is BS,200
        self.linear1 = torch.nn.Linear(200, 256)
        # ---reshape into BS,256,1,1,1
        self.brick1 = self.brick(256, 128, k=4, s=2, p=1)   # ---BS,128,2,2,2
        self.postop1 = self.postop(128)
        self.brick2 = self.brick(128, 64, k=4, s=2, p=1)    # ---BS,64,4,4,4
        self.postop2 = self.postop(64)
        self.brick3 = self.brick(64, 32, k=4, s=2, p=1)     # ---BS,32,8,8,8
        self.postop3 = self.postop(32)
        self.brick4 = self.brick(32, 16, k=4, s=2, p=1)     # ---BS,16,16,16,16
        self.postop4 = self.postop(16)
        self.brick5 = self.brick(16, 8, k=4, s=2, p=1)      # ---BS,8,32,32,32
        self.postop5 = self.postop(8)
        self.brick6 = self.brick(8, 4, k=4, s=2, p=1)       # ---BS,4,64,64,64
        self.postop6 = self.postop(4)
        self.brick7 = self.brick(4, 1, k=4, s=2, p=1)       # ---BS,1,128,128,128
        self.postop7 = self.postop(1)
        self.final_activation = torch.nn.Tanh()

    def forward(self, z):
        # --Generator takes z in the shape of BS,200
        y = self.linear1(z)
        bs, lv = y.shape
        y = y.reshape(bs, lv, 1, 1, 1)
        y = self.brick1(y); y = self.postop1(y)
        y = self.brick2(y); y = self.postop2(y)
        y = self.brick3(y); y = self.postop3(y)
        y = self.brick4(y); y = self.postop4(y)
        y = self.brick5(y); y = self.postop5(y)
        y = self.brick6(y); y = self.postop6(y)
        y = self.brick7(y)
        y = self.final_activation(y)
        return y


class DiscriminatorNet(nn.Module):
    def brick(self, inchn, outchn, k, s, p, bias=False):
        return nn.Conv3d(inchn, outchn, k, s, p, bias=False)

    def firstpostop(self):
        return nn.Sequential(
            nn.LeakyReLU(0.2, inplace=True),
        )

    def postop(self, chn):
        return nn.Sequential(
            nn.BatchNorm3d(chn, affine=True),
            nn.LeakyReLU(0.2, inplace=True),
        )

    def __init__(self, ngpu):
        super(DiscriminatorNet, self).__init__()
        self.ngpu = ngpu
        # --input is BS,1,128,128,128
        self.brick1 = self.brick(1, 4, k=4, s=2, p=1)       # --BS,4,64,64,64
        self.postop1 = self.postop(4)
        self.brick2 = self.brick(4, 8, k=4, s=2, p=1)       # --BS,8,32,32,32
        self.postop2 = self.postop(8)
        self.brick3 = self.brick(8, 16, k=4, s=2, p=1)      # --BS,16,16,16,16
        self.postop3 = self.postop(16)
        self.brick4 = self.brick(16, 32, k=4, s=2, p=1)     # --BS,32,8,8,8
        self.postop4 = self.postop(32)
        self.brick5 = self.brick(32, 64, k=4, s=2, p=1)     # --BS,64,4,4,4
        self.postop5 = self.postop(64)
        self.brick6 = self.brick(64, 128, k=4, s=2, p=1)    # --BS,128,2,2,2
        self.postop6 = self.postop(128)
        self.brick7 = self.brick(128, 256, k=2, s=1, p=0)   # ---BS,256,1,1,1
        self.linear1 = torch.nn.Linear(256 * 1 * 1 * 1, 1)
        self.final_activation = torch.nn.Sigmoid()

    def forward(self, g_out):
        y = self.brick1(g_out); y = self.postop1(y)
        y = self.brick2(y); y = self.postop2(y)
        y = self.brick3(y); y = self.postop3(y)
        y = self.brick4(y); y = self.postop4(y)
        y = self.brick5(y); y = self.postop5(y)
        y = self.brick6(y); y = self.postop6(y)
        y = self.brick7(y)
        bs, ch, h, w, d = y.shape
        y = y.reshape(bs, ch * h * w * d)
        y = self.linear1(y)
        y = self.final_activation(y)
        return y
The discriminator is updated only if D(G(z)) > PROB, where three different values of PROB were tested. The training loop is as follows:
PROB = 0.0  # ---Select from 0.0 (standard), 0.2 and 0.25
ngpu = 2
batch_size = 4
num_workers = 4
nz = 200
G_lr, G_betas = 1e-5, (0.5, 0.999)  # 5e-5, 0.5  # --0.0001, beta=0.6
D_lr, D_betas = 1e-5, (0.5, 0.999)  # 5e-5, 0.5
num_epochs = 50
print_freq = 200
save_freq = 2000
iters = 1
real_label = 1.
fake_label = 0.
fixed_noise = torch.normal(mean=0.0, std=1.0, size=(2, nz), device=main_collector)

D_optimizer = optim.Adam(Discriminator.parameters(), lr=D_lr, betas=D_betas)
G_optimizer = optim.Adam(Generator.parameters(), lr=G_lr, betas=G_betas)
criterion = torch.nn.BCELoss()

for epoch in range(num_epochs):
    for i, data in enumerate(mydataloader, 1):
        ##############################
        ######## DISCRIMINATOR #######
        ##############################
        Discriminator.zero_grad()  # ---Set gradients to 0. Use model.zero_grad() if >1 optimizers for the same model
        real_data = data[0].to(main_collector)  # ---get real data and conditional parameters
        cbs = real_data.shape[0]  # ---current batch size (because batch size can be different at the end)
        label = torch.full((cbs,), real_label, dtype=torch.float, device=main_collector)

        # --TRAIN DISCRIMINATOR WITH ALL-REAL BATCH
        real_validity = Discriminator(real_data).view(-1)
        D_loss_real = criterion(real_validity, label)
        D_loss_real.backward()
        D_x = real_validity.mean().item()

        # --TRAIN DISCRIMINATOR WITH ALL-FAKE BATCH
        z = torch.normal(mean=0.0, std=1.0, size=(cbs, nz), device=main_collector)
        fake_data = Generator(z)
        label.fill_(fake_label)
        fake_validity = Discriminator(fake_data.detach()).view(-1)
        D_loss_fake = criterion(fake_validity, label)
        D_loss_fake.backward()
        D_G_z1 = fake_validity.mean().item()

        # --TOTAL DISCRIMINATOR LOSS
        D_loss = D_loss_real + D_loss_fake
        if D_G_z1 > PROB:
            D_optimizer.step()

        ##############################
        ######## GENERATOR ###########
        ##############################
        Generator.zero_grad()
        label.fill_(real_label)  # [1,1,1,1,1,...] fake labels are real for generator cost
        output = Discriminator(fake_data).view(-1)  # We just updated D, so perform another forward pass of all-fake batch through D
        G_loss = criterion(output, label)  # ---Calculate G's loss based on this output
        G_loss.backward()
        D_G_z2 = output.mean().item()
        G_optimizer.step()

        ##############################
        ######## DETAILS #############
        ##############################
        if i % print_freq == 0:
            print('Epoch: %d/%d \tIteration: %d/%d \tD_loss: %.3f, G_loss: %.3f, \tD(x): %.4f, D(G(z)): %.4f/%.4f'
                  % (epoch + 1, num_epochs, iters, int(num_epochs * num_batches),
                     D_loss.item(), G_loss.item(), D_x, D_G_z1, D_G_z2))
        if iters % save_freq == 0:
            with torch.no_grad():
                fake_data = Generator(fixed_noise).detach().cpu().numpy()
            img_list.append(fake_data)
            np.save("img_list.npy", np.array(img_list))
        iters += 1
The generated images (right) are not even close to the ones trained on (left).
It seems that there is something inherently incorrect with my approach. What am I doing wrong here?