How to measure differences in spatial densities or organisation of individuals on a plane in R?
I have two distinct datasets which look like this:
   identity x-pos y-pos
1:        Z   0.5   0.7
2:        B   0.1   0.0
3:        C   4.6   2.5
4:        D   5.6   5.0
5:        A   0.2   1.0
6:        P   0.4   2.0
Here, each object with a unique identity is positioned on a 2D plane, and its coordinates are given by x-pos and y-pos (in micrometers).
What I want to be able to do is measure whether objects differ in spatial positioning/organisation between the two datasets. This could be a difference in clustering, for instance more clusters in one dataset than in the other;
or the radius of each cluster being larger in one dataset than in the other;
or more objects falling within a defined radius in one dataset than in the other.
Is there a simple way / an R package to do this?
Thanks!
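Since this is a point-pattern comparison, a sketch of how it is commonly done with the spatstat package may help. This is a hedged illustration: df1/df2, the window size, and the 1-micrometer radius are made-up placeholders, not taken from the question.

```r
# Minimal sketch using spatstat (assumed installed); df1/df2 stand in for the
# two datasets, with the x-pos/y-pos columns renamed to x.pos/y.pos.
library(spatstat)
set.seed(1)

df1 <- data.frame(x.pos = runif(50, 0, 10), y.pos = runif(50, 0, 10))   # spread out
df2 <- data.frame(x.pos = rnorm(50, 5, 0.8), y.pos = rnorm(50, 5, 0.8)) # clustered

# Put both patterns on a common observation window (micrometers)
win <- owin(xrange = c(0, 10), yrange = c(0, 10))
pp1 <- ppp(df1$x.pos, df1$y.pos, window = win)
pp2 <- ppp(df2$x.pos, df2$y.pos, window = win)

# Ripley's K: a curve above the theoretical Poisson line indicates clustering
K1 <- Kest(pp1)
K2 <- Kest(pp2)

# Mean nearest-neighbour distance: smaller means tighter grouping
mean(nndist(pp1))
mean(nndist(pp2))

# Proportion of points with at least one neighbour within a 1-micrometer radius
mean(nndist(pp1) < 1)
mean(nndist(pp2) < 1)
```

spatstat also offers quadrat.test() for density differences and envelope() for simulation envelopes around K, which give formal tests rather than eyeballed comparisons.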
See also questions close to this topic
-
pivot_wider does not keep all the variables
I would like to keep the variable
cat
(category) in the output of my function. However, I am not able to keep it. The idea is to apply a similar function to m <- 1 - (1 - se * p2)^df$n
based on the category, but in order to perform that step I need to keep the variable category. Here's the code:
#script3
suppressPackageStartupMessages({
  library(mc2d)
  library(tidyverse)
})

sim_one <- function() {
  df <- data.frame(
    id = 1:30,
    cat = c(rep("a", 12), rep("b", 18)),
    month = c(1:6, 1, 6, 4, 1, 5, 2, 3, 2, 5, 4, 6, 3:6, 4:6, 1:5, 5),
    n = rpois(30, 5)
  )
  nr <- nrow(df)
  df$n[df$n == "0"] <- 3
  se <- rbeta(nr, 96, 6)
  epi.a <- rpert(nr, min = 1.5, mode = 2, max = 3)
  p <- 0.2
  p2 <- epi.a * p
  m <- 1 - (1 - se * p2)^df$n
  results <- data.frame(month = df$month, m, df$cat)
  results %>%
    arrange(month) %>%
    group_by(month) %>%
    mutate(n = row_number(), .groups = "drop") %>%
    pivot_wider(
      id_cols = n,
      names_from = month,
      names_glue = "m_{.name}",
      values_from = m
    )
}

set.seed(99)
iters <- 1000
sim_list <- replicate(iters, sim_one(), simplify = FALSE)
sim_list[[1]]
#> # A tibble: 7 x 7
#>       n   m_1   m_2   m_3   m_4   m_5   m_6
#>   <int> <dbl> <dbl> <dbl> <dbl> <dbl> <dbl>
#> 1     1 0.970 0.623 0.905 0.998 0.929 0.980
#> 2     2 0.912 0.892 0.736 0.830 0.890 0.862
#> 3     3 0.795 0.932 0.553 0.958 0.931 0.798
#> 4     4 0.950 0.892 0.732 0.649 0.777 0.743
#> 5     5 NA    NA    NA    0.657 0.980 0.945
#> 6     6 NA    NA    NA    0.976 0.836 NA
#> 7     7 NA    NA    NA    NA    0.740 NA
Created on 2022-05-07 by the reprex package (v2.0.1)
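A sketch of one likely fix (toy tibble, not the full simulation): pivot_wider() drops variables that are not listed in id_cols, so including cat there keeps it in the output. The column values below are illustrative placeholders.

```r
# Toy version of the results frame from the question; cat is carried through
# the pivot by listing it in id_cols.
library(tidyverse)

results <- tibble(
  month = c(1, 1, 2, 2),
  m     = c(0.9, 0.8, 0.7, 0.6),
  cat   = c("a", "b", "a", "b")
)

out <- results %>%
  arrange(month) %>%
  group_by(month) %>%
  mutate(n = row_number()) %>%
  ungroup() %>%
  pivot_wider(
    id_cols     = c(n, cat),   # keep cat alongside the row index
    names_from  = month,
    names_glue  = "m_{month}",
    values_from = m
  )
out
```

The output keeps n, cat, m_1, and m_2; this assumes cat is constant within each (n, month) cell, otherwise the id columns would multiply the rows.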
-
calculate weighted average over several columns with NA
I have a data frame like this one:
ID duration1 duration2 total_duration quantity1 quantity2
 1         5         2              7         3         1
 2        NA         4              4         3         4
 3         5        NA              5         2        NA
I would like to do a weighted mean for each subject like this:
df$weighted_mean<- ((df$duration1*df$quantity1) + (df$duration2*df$quantity2) ) / (df$total_duration)
But since I have NAs, this command does not work, and it is not very elegant either.
The result would be this:
ID duration1 duration2 total_duration quantity1 quantity2 weighted_mean
 1         5         2              7         3         1          2.43
 2        NA         4              4         3         4             4
 3         5        NA              5         2        NA             2
Thanks in advance for the help
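A base-R sketch of one way to do this: compute the duration x quantity products per column and sum them with rowSums(..., na.rm = TRUE), which treats NA terms as zero contributions.

```r
# Rebuild the example data frame from the question
df <- data.frame(
  ID = 1:3,
  duration1 = c(5, NA, 5), duration2 = c(2, 4, NA),
  total_duration = c(7, 4, 5),
  quantity1 = c(3, 3, 2), quantity2 = c(1, 4, NA)
)

# NA-safe weighted mean: rowSums with na.rm = TRUE skips missing products
prod1 <- df$duration1 * df$quantity1
prod2 <- df$duration2 * df$quantity2
df$weighted_mean <- rowSums(cbind(prod1, prod2), na.rm = TRUE) / df$total_duration

round(df$weighted_mean, 2)  # 2.43, 4, 2 as in the desired output
```

For many duration/quantity pairs, the same idea generalises by selecting the column groups with grep() and multiplying the two matrices elementwise before rowSums().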
-
Extracting data from a NetCDF file using R for a specific location: error at the end of the code
I need some help extracting data from NetCDF files using R. I downloaded them from CORDEX (the Coordinated Regional climate Downscaling Experiment). The files have dimensions (longitude, latitude, time), and the variable is maximum temperature (tasmax). At a specific location, I need to extract tasmax at different times. I wrote the code in R, but at the end of the code an error appeared: Error (location subscript out of bounds)
getwd()
setwd("C:/Users/20120/climate change/rcp4.5/tasmax")
dir()

library(ncdf4)
library(ncdf4.helpers)
library(chron)

ncin <- nc_open("tasmax_AFR-44_ICHEC-EC-EARTH_rcp45_r1i1p1_KNMI-RACMO22T_v1_mon_200601-201012.nc")
lat <- ncvar_get(ncin, "lat")
lon <- ncvar_get(ncin, "lon")
tori <- ncvar_get(ncin, "time")
title <- ncatt_get(ncin, 0, "title")
institution <- ncatt_get(ncin, 0, "institution")
datasource <- ncatt_get(ncin, 0, "source")
references <- ncatt_get(ncin, 0, "references")
history <- ncatt_get(ncin, 0, "history")
Conventions <- ncatt_get(ncin, 0, "Conventions")
tunits <- ncatt_get(ncin, "time", "units")
tustr <- strsplit(tunits$value, " ")
ncin$dim$time$units
ncin$dim$time$calendar
tas_time <- nc.get.time.series(ncin, v = "tasmax", time.dim.name = "time")
tas_time[c(1:3, length(tas_time) - 2:0)]
tmp.array <- ncvar_get(ncin, "tasmax")
dunits <- ncatt_get(ncin, "tasmax", "units")
tmp.array <- tmp.array - 273.15
nc_close(ncin)
which.min(abs(lat - 28.9))
which.min(abs(lon - 30.2))
tmp.slice <- tmp.array[126, 32981, ]
tmp.slice
Error in tmp.array[126, 32981, ] : subscript out of bounds
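One plausible cause of this error is that the which.min() results were printed rather than stored, and a value (32981) was then typed in place of an index. A sketch with toy vectors standing in for the file's lat/lon grids (the grid spacing and array sizes here are made up):

```r
# Hypothetical coordinate vectors and data array in place of the NetCDF file
lat <- seq(-45, 45, by = 0.5)   # stand-in latitude grid
lon <- seq(0, 60, by = 0.5)     # stand-in longitude grid
tmp.array <- array(rnorm(length(lon) * length(lat) * 10),
                   dim = c(length(lon), length(lat), 10))

# Store the nearest-grid-cell indices instead of just printing them
j <- which.min(abs(lat - 28.9))   # index along the lat dimension
i <- which.min(abs(lon - 30.2))   # index along the lon dimension

# Both indices are now guaranteed to be within the array's dimensions
tmp.slice <- tmp.array[i, j, ]    # full time series at the nearest grid cell
length(tmp.slice)
```

Note that for rotated-pole CORDEX grids, lat and lon may be 2D arrays rather than vectors, in which case the nearest cell must be found over both dimensions jointly (e.g. with which(... , arr.ind = TRUE)).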
-
Why do KMedoids and hierarchical clustering return different results?
I have a huge dataframe which only contains 0s and 1s, and I tried to use
scipy.cluster.hierarchy
to get the dendrogram and then the method sch.fcluster
to get the clusters at a specific cutoff (the metric for the distance matrix is Jaccard; the linkage method is "centroid"). However, when I wanted to determine the optimal number of clusters for my dataframe, I noticed that KMedoids combined with the elbow method can help. Then, after learning the best number of clusters, e.g. 2, I tried to use
KMedoids(n_clusters=2,metric='jaccard').fit(dataset)
to get the clusters, but the result is different from the hierarchical method. (The reason I don't use KMeans is that it is too slow for my dataframe.) Therefore, I did a test (indices 0, 1, 2, 3 will be grouped):
import pandas as pd
import numpy as np
from scipy.spatial.distance import pdist

label1 = np.random.choice([0, 1], size=20)
label2 = np.random.choice([0, 1], size=20)
label3 = np.random.choice([0, 1], size=20)
label4 = np.random.choice([0, 1], size=20)
dataset = pd.DataFrame([label1, label2, label3, label4])
dataset
Method KMedoids:
Since there are only 4 rows, the cluster number was set to 2.
from sklearn_extra.cluster import KMedoids

cobj = KMedoids(n_clusters=2, metric='jaccard').fit(dataset)
labels = cobj.labels_
labels
the clustering result as shown below:
Method Hierarchical:
import scipy.cluster.hierarchy as sch

# calculate distance matrix
disMat = sch.distance.pdist(dataset, metric='jaccard')
disMat1 = sch.distance.squareform(disMat)

# cluster:
Z2 = sch.linkage(disMat1, method='centroid')
sch.fcluster(Z2, t=1, criterion='distance')
To get the same number of clusters I tried several cutoffs; the number of clusters was 2 when the cutoff was set to 1. Here is the result:
I also read that the dataframe passed to KMedoids should be the original dataframe, not the distance matrix, but it seems that KMedoids converts the original dataframe to a new form for some reason, because I got this data conversion warning:
DataConversionWarning: Data was converted to boolean for metric jaccard warnings.warn(msg, DataConversionWarning)
I also got a warning when performing the hierarchical method:
ClusterWarning: scipy.cluster: The symmetric non-negative hollow observation matrix looks suspiciously like an uncondensed distance matrix
Purpose:
What I want is a method that returns the clusters once I know the optimal number of clusters. The hierarchical method requires trying different cutoffs, while KMedoids doesn't, but KMedoids returns a different result.
Can anybody explain this to me? And are there better ways to perform clustering?
-
R: Double Clustering of Standard Errors in Panel Regression
So I am analysing fund data. I use a fixed-effects model and want to double-cluster my standard errors along "ISIN" and "Date" with plm().
The output of dput(data) is:
> dput(nd[1:100, ]) structure(list(Date = structure(c(1517356800, 1519776000, 1522454400, 1525046400, 1527724800, 1530316800, 1532995200, 1535673600, 1538265600, 1540944000, 1543536000, 1546214400, 1548892800, 1551312000, 1553990400, 1556582400, 1559260800, 1561852800, 1564531200, 1567209600, 1569801600, 1572480000, 1575072000, 1577750400, 1580428800, 1582934400, 1585612800, 1588204800, 1590883200, 1593475200, 1596153600, 1598832000, 1601424000, 1604102400, 1606694400, 1609372800, 1612051200, 1614470400, 1617148800, 1619740800, 1622419200, 1625011200, 1627689600, 1630368000, 1632960000, 1635638400, 1638230400, 1640908800, 1517356800, 1519776000, 1522454400, 1525046400, 1527724800, 1530316800, 1532995200, 1535673600, 1538265600, 1540944000, 1543536000, 1546214400, 1548892800, 1551312000, 1553990400, 1556582400, 1559260800, 1561852800, 1564531200, 1567209600, 1569801600, 1572480000, 1575072000, 1577750400, 1580428800, 1582934400, 1585612800, 1588204800, 1590883200, 1593475200, 1596153600, 1598832000, 1601424000, 1604102400, 1606694400, 1609372800, 1612051200, 1614470400, 1617148800, 1619740800, 1622419200, 1625011200, 1627689600, 1630368000, 1632960000, 1635638400, 1638230400, 1640908800, 1517356800, 1519776000, 1522454400, 1525046400), tzone = "UTC", class = c("POSIXct", "POSIXt")), Dummy = c(0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 0, 0, 0, 0), ISIN = c("LU1883312628", "LU1883312628", "LU1883312628", "LU1883312628", "LU1883312628", "LU1883312628", "LU1883312628", "LU1883312628", "LU1883312628", "LU1883312628", "LU1883312628", "LU1883312628", "LU1883312628", "LU1883312628", "LU1883312628", "LU1883312628", "LU1883312628", "LU1883312628", "LU1883312628", "LU1883312628", "LU1883312628", "LU1883312628", "LU1883312628", 
"LU1883312628", "LU1883312628", "LU1883312628", "LU1883312628", "LU1883312628", "LU1883312628", "LU1883312628", "LU1883312628", "LU1883312628", "LU1883312628", "LU1883312628", "LU1883312628", "LU1883312628", "LU1883312628", "LU1883312628", "LU1883312628", "LU1883312628", "LU1883312628", "LU1883312628", "LU1883312628", "LU1883312628", "LU1883312628", "LU1883312628", "LU1883312628", "LU1883312628", "NL0000289783", "NL0000289783", "NL0000289783", "NL0000289783", "NL0000289783", "NL0000289783", "NL0000289783", "NL0000289783", "NL0000289783", "NL0000289783", "NL0000289783", "NL0000289783", "NL0000289783", "NL0000289783", "NL0000289783", "NL0000289783", "NL0000289783", "NL0000289783", "NL0000289783", "NL0000289783", "NL0000289783", "NL0000289783", "NL0000289783", "NL0000289783", "NL0000289783", "NL0000289783", "NL0000289783", "NL0000289783", "NL0000289783", "NL0000289783", "NL0000289783", "NL0000289783", "NL0000289783", "NL0000289783", "NL0000289783", "NL0000289783", "NL0000289783", "NL0000289783", "NL0000289783", "NL0000289783", "NL0000289783", "NL0000289783", "NL0000289783", "NL0000289783", "NL0000289783", "NL0000289783", "NL0000289783", "NL0000289783", "DE0008474008", "DE0008474008", "DE0008474008", "DE0008474008"), Returns = c(-0.12401, -4.15496, -1.39621, 4.46431, -2.28814, -0.58213, 3.61322, -3.56401, 0.6093, -4.73124, 0.88597, -5.55014, 5.12313, 2.65441, 1.3072, 2.99972, -5.1075, 3.51965, 0.24626, -2.21961, 4.48332, -0.03193, 2.19313, 1.81355, -2.2836, -8.3185, -14.58921, 4.47981, 4.52948, 5.51294, -2.16857, 2.56992, -2.04736, -6.17825, 14.71218, 1.24079, -1.33888, 3.5197, 8.09674, 1.43074, 3.79434, 0.47398, 1.57474, 2.48837, -3.08439, 3.68851, -2.93803, 6.43656, 2.67598, -3.39767, -5.27997, 4.76756, 4.89914, -0.95931, 2.22484, 3.01478, 1.63997, -6.64158, 3.46497, -8.54853, 7.40113, 5.68973, 1.64367, 4.35256, -5.09351, 3.43618, 2.16774, -0.77703, 3.16832, 1.65626, 4.91897, 1.76163, 1.49508, -5.16847, -9.53639, 12.74246, 3.08746, 3.4028, 0.09515, 5.66077, -2.85661, 
-2.58972, 9.53565, 2.93138, 0.32556, 2.92393, 5.02059, 0.98137, 0.58733, 4.91219, 2.21603, 2.52087, -3.87762, 7.66159, -0.04559, 4.48257, 2.83511, -6.27841, -3.98683, 4.99554), Flows = c(-0.312598458, -37.228563578, -119.065088084, -85.601069424, -46.613436838, -20.996760878, -12.075112555, -40.571568112, -16.210315254, -54.785115578, -55.93565336, -25.073939479, -16.513305702, -111.112262813, -17.260252326, -44.287088276, -84.358676293, -12.73665543, -14.846322594, -30.353217826, -43.002634628, -31.293725624, -32.291532262, -21.145334594, -33.460150254, -22.458849454, -34.690817528, -34.088358344, -4.069613214, -7.841523244, -6.883674001, -11.99060429, -19.155102931, -20.274682083, -33.509645025, -25.764368282, -22.451403457, -39.075362392, -9.772306537, -7.214728071, -10.462230506, -12.550102699, -0.439609898, -16.527865041, -15.938402293, -10.916678964, -11.041205907, -11.627537098, -13.797947969, -18.096144272, 29.879529566, -51.895196556, -3.192064966, -1.469562773, 9.739671656, -35.108549922, -19.490401121, 36.459406559, -66.213269625, 8.105824198, -17.078089399, -59.408458411, 1.227033593, -42.501421101, -15.275983037, 19.425363714, -23.165013159, -19.68599313, -20.478530269, -19.566890333, -19.63229278, -59.274372862, -37.128708445, 5.129404763, -2.650978954, -0.566245645, -14.80700799, 4.891308881, -18.16286654, -17.570559084, -2.726629634, -14.482219321, -35.795673521, -10.119935801, -14.37900783, -20.385053784, -4.550848701, -17.672355509, -14.270420088, 1.440911458, -8.924636198, -5.749771862, -12.284920947, -23.093834986, -13.553880939, -31.572182943, -22.977082191, -8.076560195, -11.825577374, -9.263872938), TNA = c(2474.657473412, 2327.75517961, 2171.146502197, 2175.433117247, 2082.147188171, 2042.121760963, 2031.311390907, 1918.904748403, 1914.140451001, 1765.867322561, 1724.972362171, 1600.059421422, 1605.009162592, 1539.205393073, 1540.8291693, 1538.550310809, 1370.631945404, 1404.091772234, 1351.60138448, 1290.98574898, 1309.942298579, 
1280.634128059, 1278.146819041, 1281.50075434, 1189.563983023, 1062.001168646, 859.735053702, 868.096185968, 894.397805491, 933.614731653, 885.975121845, 897.018097461, 854.196359787, 781.178047528, 863.00585297, 846.859512502, 796.10866733, 784.290994645, 838.747509395, 841.511540715, 863.678978862, 854.663205271, 856.363306246, 859.460891875, 816.275861034, 836.347760358, 800.867957871, 842.657752288, 2742.709413, 2629.70296, 2518.690562, 2516.902480001, 2635.037923, 2606.124805, 2672.082125, 2715.556617, 2738.845915, 2591.318371, 2613.260789, 2396.060545001, 2554.437804, 2638.160519, 2680.990319, 2753.467368, 2533.347075001, 2637.887076, 2670.127393, 2628.138778001, 2688.643794, 2711.56785, 2823.634535001, 2811.983963001, 2835.218976, 2672.765021, 2413.332814, 2718.586512, 2727.69596, 2823.040628, 2805.482839, 2944.602701, 2855.870812, 2765.189256, 2990.804719, 3066.36598, 3059.603769, 3126.458368, 3276.612153, 3289.257788, 3291.864476, 3397.759970999, 3461.462599, 3540.518638, 3388.702548, 3622.641661, 3604.82519, 3732.115875999, 4129.617979, 3857.780349, 3687.848268001, 3858.323607), Age = c(2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 9, 9, 9, 9, 9, 9, 9, 9, 9, 9, 9, 9, 10, 10, 10, 10, 10, 10, 10, 10, 10, 10, 10, 10, 11, 11, 11, 11, 11, 11, 11, 11, 11, 11, 11, 11, 12, 12, 12, 12, 12, 12, 12, 12, 12, 12, 12, 12, 62, 62, 62, 62)), row.names = c(NA, -100L), class = c("tbl_df", "tbl", "data.frame"))
My code initially yielded a result; I didn't change anything, but all of a sudden it doesn't let me execute the last line of code.
library(plm)
attach(nd)
library(lmtest)
library(stargazer)
library(sandwich)
library(etable)
library(pacman)
library(fixest)
library(multiwayvcov)
library(foreign)

#cleaning
#adjust units of TNA and Flows
nd <- nd %>% mutate(TNA = TNA / 1000000, Flows = Flows / 1000000) #1mio and 1mio

#drop na's
#nd <- nd %>%
#  drop_na()

#variable creation for model
Y <- cbind(nd$Flows)
X <- cbind(nd$Dummy, lag(nd$Returns), lag(nd$TNA), nd$Age)

# descriptive statistics
summary(Y)
summary(X)

#random effects
random2 <- plm(Y ~ X, nd, model = 'random', index = c('ISIN', 'Date'))
summary(random2)

#fixed effect model
fixed2 <- plm(Y ~ X, nd, model = 'within', index = c('ISIN', 'Date'))

# Breusch-Pagan Test
bptest(fixed2)

#Test which model to use: fixed effects or random effects
#hausman test
phtest(random2, fixed2) # we take fixed effects

##Double-clustering formula (Thompson, 2011)
vcovDC <- function(x, ...){
  vcovHC(x, cluster = "ISIN", ...) +
    vcovHC(x, cluster = "Date", ...) -
    vcovHC(x, method = "white1", ...)
}

#visualize SEs
coeftest(fixed2, vcov = function(x) vcovDC(x, type = "HC1"))
stargazer(coeftest(fixed2, vcov = function(x) vcovDC(x, type = "HC1")), type = "text")
Now, when i try to run:
coeftest(fixed2, vcov=function(x) vcovDC(x, type="HC1"))
I get the error: Error in match.arg(cluster) : 'arg' should be one of “group”, “time”. Before, it didn't happen.
I highly appreciate any answer. I'd also like to know whether the formula I used for the double-clustered standard errors is correct. I followed the approach from: Double clustered standard errors for panel data
- the comment from Iandorin
Edit: I rewrote the code and now it works:
library(plm)
attach(nd)
library(lmtest)
library(stargazer)
library(sandwich)
library(etable)
library(pacman)
library(fixest)
library(multiwayvcov)
library(foreign)

#cleaning
#adjust units of TNA and Flows
#nd <- nd %>%
#  mutate(TNA = TNA / 1000000, Flows = Flows / 1000000) #1mio and 1mio

#drop na's
#nd <- nd %>%
#  drop_na()

#variable creation for model
Y <- cbind(nd$Flows)
X <- cbind(nd$Dummy, lag(nd$Returns), lag(nd$TNA), nd$Age)

# descriptive statistics
summary(Y)
summary(X)

#random effects
random2 <- plm(Y ~ X, nd, model = 'random', index = c('ISIN', 'Date'))
summary(random2)

#fixed effect model
fixed2 <- plm(Y ~ X, nd, model = 'within', index = c('ISIN', 'Date'))

# Breusch-Pagan Test
bptest(fixed2)

#Test which model to use: fixed effects or random effects
#hausman test
phtest(random2, fixed2) # we take fixed effects

##Double-clustering formula (Thompson, 2011)
vcovDC <- function(x, ...){
  vcovHC(x, cluster = "ISIN", ...) +
    vcovHC(x, cluster = "Date", ...) -
    vcovHC(x, method = "white1", ...)
}

testamk <- plm(Y ~ X, nd, model = 'within', index = c('ISIN', 'Date'))
summary(testamk)
coeftest(testamk, vcov = function(x) vcovHC(x, cluster = "group", type = "HC1"))
Many thanks in advance! Joe
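For reference, plm ships a built-in double-clustering estimator, vcovDC(), which implements Thompson-style clustering over both panel dimensions and avoids the hand-rolled formula (the match.arg error arose because plm's vcovHC() only accepts cluster = "group" or cluster = "time", not index names like "ISIN"). A sketch on plm's bundled Grunfeld data, since the question's nd frame isn't reproducible here:

```r
# Assumes plm and lmtest are installed; Grunfeld is a stand-in panel dataset.
library(plm)
library(lmtest)

data("Grunfeld", package = "plm")
fe <- plm(inv ~ value + capital, data = Grunfeld,
          model = "within", index = c("firm", "year"))

# Double-clustered (firm and year) standard errors via plm's own vcovDC()
coeftest(fe, vcov = function(x) vcovDC(x, type = "HC1"))
```

The first index variable maps to cluster "group" and the second to "time", so no index names ever need to be passed to the variance estimator.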
-
Seurat - cannot plot the same dimplot again
I am trying to rewrite the code of this paper: https://doi.org/10.1038/s42003-020-0837-0
I have written the code step by step based on the instructions in the methods section. But after clustering, when plotting the clusters with DimPlot, I get a plot that differs from the one in the paper.
I wonder what the problem is. I have tuned every parameter to reproduce the plot, but it hasn't worked yet.
Graph of the paper
My graph
Please help me to solve this issue. -
clip raster by SpatialPolygonsDataFrame
I have a raster and want to retain only the sea part of it, removing the land part. If my raster is "ras" and my SpatialPolygonsDataFrame is "worldMap", I tried
ras.msk <- rgeos::gDifference(ras,worldMap)
However, I get the following error, which I do not understand; I gather the function can only be used with two SpatialPolygonsDataFrames, not with a raster?
Error in RGEOSUnaryPredFunc(spgeom, byid, "rgeos_isvalid") : rgeos_convert_R2geos: invalid R class RasterLayer, unable to convert.
if I do
r2 <- crop(ras, worldMap)
r3 <- mask(r2, worldMap)
I get the land part of the raster. How do I get the opposite, so that the remaining raster excludes the area overlapping the SpatialPolygonsDataFrame?
The end result I need is all raster point values at sea to be 1, and the raster point values on land to be 0.
My current code is as follows:
# Make raster layer of study area
ras <- raster(ext = extent(-70, -55, -60, -38), res = c(0.01, 0.01)) #lat/long xmin, xmax, ymin, ymax

# give all raster points a "1"
ras[] <- 1

# project the raster
projection(ras) <- "+proj=longlat +datum=WGS84 +no_defs +ellps=WGS84 +towgs84=0,0,0"

# load land
library(rworldmap)
worldMap <- getMap(resolution = "high")
projection(worldMap) <- CRS(proj4string(ras))

# crop raster by land
ras.msk <- rgeos::gDifference(ras, worldMap)
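One possible approach, sketched below: raster::mask() accepts inverse and updatevalue arguments, so cells covered by the land polygons can be overwritten with 0 while sea cells keep their value of 1, with no rgeos call needed. This uses the coarse world map to keep the example light (the question uses resolution = "high", which additionally requires the rworldxtra package).

```r
# Assumes the raster and rworldmap packages are installed.
library(raster)
library(rworldmap)

ras <- raster(ext = extent(-70, -55, -60, -38), res = c(0.1, 0.1))
ras[] <- 1
projection(ras) <- "+proj=longlat +datum=WGS84 +no_defs"

worldMap <- getMap(resolution = "coarse")
projection(worldMap) <- projection(ras)

# inverse = TRUE masks the cells covered by the polygons; updatevalue = 0
# writes 0 there, so the result is sea = 1, land = 0
ras.msk <- mask(ras, worldMap, inverse = TRUE, updatevalue = 0)
```

The resolution of the toy raster is coarsened to 0.1 degrees only to keep the sketch fast; the same call works at 0.01.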
-
Why doesn't the injected wavenumber match the 2-D plot?
I want to inject a 2D travelling wave into a sensor system for seismic data analysis. To do this, I inject a 2D sine wave of known parameters (wavenumber
(lambda)
, angle of propagation, and temporal frequency). I want to check from the DFT figures whether the wavenumber was properly injected into the sensor system. I provide a sequence of Python scripts expressing my problem, which can be found in the following link. My questions are:
Q1. Does the approach right to inject a 2D travelling wave in the sensor system?
Q2. If the answer is 'YES', then why doesn't the wavenumber match the figures?
Q3. If this is not the right approach, what is the correct method to inject a 2D travelling wave into the sensor system?
Could anyone please help find a better solution and make the necessary corrections? Thanks and regards
-
How to create a matrix of evenly-spaced points within an angled polygon, given the corner coordinates [R]
Given some example random data, with UTM coordinates for each corner:
test <- structure(list(
  name = c("P11C1", "P11C2", "P11C3", "P11C4"),
  east = c(6404807.016, 6404808.797, 6404786.695, 6404784.761),
  north = c(497179.4834, 497159.1862, 497156.6599, 497176.4444),
  plot_num = c(11, 11, 11, 11)
), row.names = c(NA, -4L), class = c("tbl_df", "tbl", "data.frame"))
If we plot this as a polygon, we can see a tilted rectangle (this is because the shape was generated from real differential-GPS coordinates captured on the ground):
library(ggplot2)
ggplot(test) +
  geom_polygon(aes(east, north))
- My question is: how can I generate evenly spaced points with custom dimensions within this polygon? For instance, a grid of evenly spaced 10x11 points. Can anyone suggest a neat way to do this, given the corner points? I have hundreds of discrete plots over which I would then like to loop/map a solution. I assume this involves some simple geometry, but with the added confusion of a tilted plot I've gotten really confused and could not find a similar solution here on SO or elsewhere! FYI, in this instance I am not expecting projection to be an issue since these are UTM coordinates, but a spatial solution that accounts for global projections would be cool to see, too!
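A base-R sketch of one way to do this: bilinear interpolation between the four corners yields an evenly spaced grid that follows the plot's tilt automatically. This assumes the corners are ordered around the rectangle (C1, C2, C3, C4), as in the example data.

```r
# Bilinear interpolation of a corner-defined quadrilateral into an nx-by-ny grid
grid_in_plot <- function(df, nx = 10, ny = 11) {
  p1 <- c(df$east[1], df$north[1]); p2 <- c(df$east[2], df$north[2])  # one edge
  p4 <- c(df$east[4], df$north[4]); p3 <- c(df$east[3], df$north[3])  # opposite edge
  pts <- expand.grid(u = seq(0, 1, length.out = nx),
                     v = seq(0, 1, length.out = ny))
  # interpolate along each edge, then between the two edges
  east  <- (1 - pts$v) * ((1 - pts$u) * p1[1] + pts$u * p2[1]) +
           pts$v       * ((1 - pts$u) * p4[1] + pts$u * p3[1])
  north <- (1 - pts$v) * ((1 - pts$u) * p1[2] + pts$u * p2[2]) +
           pts$v       * ((1 - pts$u) * p4[2] + pts$u * p3[2])
  data.frame(east = east, north = north)
}

test <- data.frame(
  name  = c("P11C1", "P11C2", "P11C3", "P11C4"),
  east  = c(6404807.016, 6404808.797, 6404786.695, 6404784.761),
  north = c(497179.4834, 497159.1862, 497156.6599, 497176.4444)
)
pts <- grid_in_plot(test, nx = 10, ny = 11)
nrow(pts)  # 110 points covering the tilted rectangle
```

Because the function works per data frame of four corners, it can be mapped over hundreds of plots with split() plus lapply(), or group_split() plus purrr::map() in the tidyverse.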