How to access a Neo4j APOC UUID as a string?
I am trying to use apoc.create.uuid() from Node.js for the first time. In the Neo4j Browser I can see that the result is a string like this:
"a1d0d202-b585-4130-ba96-4c75ca4860ca"
...but in node it appears as:
"\ta1d0d202-b585-4130-ba96-4c75ca4860ca"
this is the cypher query:
MATCH (r:Race {race_id: $race_id})
WITH r
MATCH (m:Member)-[v:HAS_VOTED]->(b)-[vr:FOR_RACE]->(r)
RETURN {votes:v.voter_choice}
console.log(JSON.stringify(result.records[0])) produces:
[[{"votes":"\ta1d0d202-b585-4130-ba96-4c75ca4860ca"},{"votes":"\ta1d0d202-b585-4130-ba96-4c75ca4860ca"},{"votes":"\ta1d0d202-b585-4130-ba96-4c75ca4860ca"},{"votes":"\ta1d0d202-b585-4130-ba96-4c75ca4860ca"},{"votes":"bd607ccd-85be-4b78-9d6e-89cbbb087d01"},{"votes":"bd607ccd-85be-4b78-9d6e-89cbbb087d01"},{"votes":"bd607ccd-85be-4b78-9d6e-89cbbb087d01"},{"votes":"bd607ccd-85be-4b78-9d6e-89cbbb087d01"},{"votes":"bd607ccd-85be-4b78-9d6e-89cbbb087d01"}]]
This is also not consistent across all returned values: not all of them have the extra leading character (the "\t" is a JSON-escaped tab). I am really not sure what causes this, but I need these values to be consistent for comparison purposes. Can anyone explain what is going on here?
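Whatever the root cause of the stray tab, comparisons can be made safe by normalizing the values on the client before comparing them. The sketch below is a workaround, not a root-cause fix, and the record data is hypothetical, shaped like the JSON output above:

```javascript
// Normalize a UUID string before comparison: strip leading/trailing
// whitespace (including the stray tab) and lower-case it.
function normalizeUuid(value) {
  return String(value).trim().toLowerCase();
}

// Hypothetical rows shaped like the stringified records above.
const records = [
  { votes: "\ta1d0d202-b585-4130-ba96-4c75ca4860ca" },
  { votes: "a1d0d202-b585-4130-ba96-4c75ca4860ca" },
];

// With normalization, both entries compare equal despite the tab.
const allEqual = records.every(
  (r) => normalizeUuid(r.votes) === normalizeUuid(records[0].votes)
);
console.log(allEqual); // true
```

Applying the same normalization on both sides of every comparison makes the occasional leading tab harmless until the source of it is found.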
See also questions close to this topic
-
Connection failure when using Airflow to run a Python script connecting to a Neo4j database
I'm trying to use Airflow to orchestrate a workflow in which a Neo4j Docker container is started and a Python script then queries the Neo4j database (the code runs on an AWS EC2 instance). I am able to start the Neo4j container successfully, but when I run the task that queries the database, I get a connection error:
neo4j.exceptions.ServiceUnavailable: Connection to 127.0.0.1:7687 closed without handshake response
If I run the Python script manually on EC2, it connects to the Neo4j database without any issue. The code I am using is below:
class Neo4jConnection:
    def __init__(self, uri, user, pwd):
        self.__uri = uri
        self.__user = user
        self.__pwd = pwd
        self.__driver = None
        try:
            self.__driver = GraphDatabase.driver(self.__uri, auth=(self.__user, self.__pwd))
            logger.info('SUCCESS: Connected to the Neo4j Database.')
        except Exception as e:
            logger.info('ERROR: Could not connect to the Neo4j Database. See console for details.')
            raise SystemExit(e)

    def close(self):
        if self.__driver is not None:
            self.__driver.close()

    def query(self, query, parameters=None, db=None):
        assert self.__driver is not None, "Driver not initialized!"
        session = None
        response = None
        try:
            session = self.__driver.session(database=db) if db is not None else self.__driver.session()
            response = list(session.run(query, parameters))
        except Exception as e:
            logger.info("Query failed:", e)
        finally:
            if session is not None:
                session.close()
        return response


class LoadWikiPathway2Neo4j:
    def __init__(self):
        # replace localhost with 127.0.0.1
        self.connection = Neo4jConnection(uri="bolt://localhost:7687", user="neo4j", pwd="test")

    def loadData(self):
        # the volume is mounted into the neo4j docker container
        WIKIPATHWAY_MOUNTED_DATA_VOLUME = "/home/ec2-user/wikipathway_neo4j/data/Human/"
        # file path inside the neo4j docker container
        WIKIPATHWAY_DATA_DOCKER_VOLUME = "file:///var/lib/neo4j/data/Human"

        # connect db
        graph = self.connection

        # only run once
        graph.query('''MATCH (n) DETACH DELETE n''')
        graph.query('''CALL n10s.graphconfig.init()''')
        graph.query('''CREATE CONSTRAINT n10s_unique_uri IF NOT EXISTS
                       ON (r:Resource) ASSERT r.uri IS UNIQUE''')
        graph.query('''CALL n10s.nsprefixes.removeAll()''')

        cypher = '''WITH '@prefix biopax: <http://www.biopax.org/release/biopax-level3.owl#> .
            @prefix cito: <http://purl.org/spar/cito/> .
            @prefix dc: <http://purl.org/dc/elements/1.1/> .
            @prefix dcat: <http://www.w3.org/ns/dcat#> .
            @prefix dcterms: <http://purl.org/dc/terms/> .
            @prefix foaf: <http://xmlns.com/foaf/0.1/> .
            @prefix freq: <http://purl.org/cld/freq/> .
            @prefix gpml: <http://vocabularies.wikipathways.org/gpml#> .
            @prefix hmdb: <https://identifiers.org/hmdb/> .
            @prefix ncbigene: <https://identifiers.org/ncbigene/> .
            @prefix owl: <http://www.w3.org/2002/07/owl#> .
            @prefix pav: <http://purl.org/pav/> .
            @prefix prov: <http://www.w3.org/ns/prov#> .
            @prefix pubmed: <http://www.ncbi.nlm.nih.gov/pubmed/> .
            @prefix rdf: <http://www.w3.org/1999/02/22-rdf-syntax-ns#> .
            @prefix rdfs: <http://www.w3.org/2000/01/rdf-schema#> .
            @prefix skos: <http://www.w3.org/2004/02/skos/core#> .
            @prefix void: <http://rdfs.org/ns/void#> .
            @prefix wp: <http://vocabularies.wikipathways.org/wp#> .
            @prefix wprdf: <http://rdf.wikipathways.org/> .
            @prefix xsd: <http://www.w3.org/2001/XMLSchema#> . ' AS txt
            CALL n10s.nsprefixes.addFromText(txt) YIELD prefix, namespace
            RETURN prefix, namespace'''
        graph.query(cypher)

        fileLst = os.listdir(WIKIPATHWAY_MOUNTED_DATA_VOLUME)
        # load data
        for filename in fileLst:
            filePath = f'{WIKIPATHWAY_DATA_DOCKER_VOLUME}/{filename}'
            print(filePath)
            cypher = f'''CALL n10s.rdf.import.fetch("{filePath}", "Turtle")'''
            logger.info(cypher)
            graph.query(cypher)
            logger.info(f"{filename} is loaded")
        graph.close()


def load_wikipathway_files():
    data = LoadWikiPathway2Neo4j()
    data.loadData()


with DAG(
    default_args=default,
    dag_id=os.path.basename(__file__).replace(".py", ""),
    schedule_interval=None,
    catchup=False,
    start_date=days_ago(1),
    tags=['s3', 'download_wikipathway'],
) as dag:
    loadWikipathwayFile = PythonOperator(
        task_id="load_wikipathway",
        python_callable=load_wikipathway_files,
    )
-
Return count of relationships instead of all
I wonder if anyone can advise how to adjust the following query so that it returns one row per node pair with a count of the relationships between them, rather than every relationship. I have some nodes with many relationships, and it's killing the graph's performance.
MATCH (p:Provider {countorig: "XXXX"})-[r:supplied]-(i:Importer) RETURN p, i LIMIT 100
Many thanks
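The desired result shape is one row per (provider, importer) pair carrying a relationship count; in Cypher that is what an aggregation such as count(r) produces server-side, which is the efficient fix. The sketch below shows the same grouping client-side over hypothetical relationship rows, just to make the target shape concrete:

```javascript
// Hypothetical rows: one entry per relationship, as the original query returns.
const rows = [
  { provider: "P1", importer: "I1" },
  { provider: "P1", importer: "I1" },
  { provider: "P1", importer: "I2" },
];

// Collapse to one entry per (provider, importer) pair with a count,
// mirroring what a Cypher aggregation would return.
const counts = {};
for (const { provider, importer } of rows) {
  const key = `${provider}->${importer}`;
  counts[key] = (counts[key] || 0) + 1;
}

console.log(counts); // { "P1->I1": 2, "P1->I2": 1 }
```

Doing this in the database rather than the client also avoids shipping every relationship over the wire, which is where the performance cost comes from.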
-
How to extract dynamic property names from a json file in Neo4J using Cypher Query
The property names under tags are dynamic, e.g. linux and cypher in the JSON below. I am trying to extract the dynamic tags and their values and attach them as properties on the Person node. Here is what I have so far:
CALL apoc.load.json("file:///example.json") YIELD value
MERGE (p:Person {name: value.name})
ON CREATE SET p.job = value.job, p.department = value.department
RETURN p AS Person;
example.json:
{
  "name": "Harry",
  "job": "Developer",
  "tags": {
    "linux": "awesome",
    "cypher": "Working on it"
  },
  "department": "IT"
}
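Since the tag names are not known in advance, they have to be enumerated at runtime rather than referenced by name. The sketch below shows that enumeration in plain JavaScript over the example document; in Cypher, a map-merge such as SET p += value.tags can achieve a similar effect when the tag values are primitives (worth verifying against your APOC/Neo4j version):

```javascript
// The example document from above.
const value = {
  name: "Harry",
  job: "Developer",
  tags: { linux: "awesome", cypher: "Working on it" },
  department: "IT",
};

// Start from the statically known properties...
const props = { name: value.name, job: value.job, department: value.department };

// ...then enumerate the dynamic tag names at runtime and flatten each
// key/value pair into the property map, as they would be set on Person.
for (const [tag, tagValue] of Object.entries(value.tags)) {
  props[tag] = tagValue;
}

console.log(props);
// { name: "Harry", job: "Developer", department: "IT",
//   linux: "awesome", cypher: "Working on it" }
```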
-
How to ensure atomicity in Neo4j Cypher writes
I have never used the transaction features in Neo4j Cypher before, so I'm not sure how to do this. I have a series of writes that must all complete, or none of them:
CREATE (e:Elections)
SET e += $electionObj, e.election_id = apoc.create.uuid(), e.createdAt = date()
WITH e, $nominate_authorizers AS na
UNWIND na AS n_auth
MATCH (nm:Member {member_id: n_auth.member_id})
CREATE (nm)-[nr:NOMINATE_AUTHORIZER]->(e)
WITH e, nm, $certify_authorizers AS ca
UNWIND ca AS c_auth
MATCH (cm:Member {member_id: c_auth.member_id})
CREATE (cm)-[cr:CERTIFY_AUTHORIZER]->(e)
WITH e, nm, cm, $tally_authorizers AS ta
UNWIND ta AS t_auth
MATCH (tm:Member {member_id: t_auth.member_id})
CREATE (tm)-[tr:TALLY_AUTHORIZER]->(e)
WITH e, nm, cm, tm, $audit_authorizers AS aa
UNWIND aa AS a_auth
MATCH (am:Member {member_id: a_auth.member_id})
CREATE (am)-[ar:AUDIT_AUTHORIZER]->(e)
WITH e, nm, cm, tm, am, $races AS races
UNWIND races AS race
CREATE (r:Race)
SET r.name = race, r.race_id = apoc.create.uuid()
CREATE (r)-[rr:OF_ELECTION]->(e)
RETURN {election: e}
The problem is that if an error causes one of these MATCHes to return 0 rows, the query moves right along and creates only the nodes it found. What is the most efficient way of achieving atomicity here? I have looked at the APOC library, but I'm not sure about it; any suggestion would be appreciated.
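The all-or-nothing behavior being asked for is what an explicit database transaction provides: stage all writes, check that every lookup actually matched, then commit or roll back as a unit (with the Node.js driver, this is a session transaction with explicit commit/rollback). The sketch below is a purely conceptual illustration of that pattern with hypothetical in-memory steps, not driver code:

```javascript
// Conceptual all-or-nothing runner: apply a list of write steps to a staged
// copy of the state, and only "commit" (return the staged copy) if every
// step succeeds. A failing step (e.g. a lookup matching 0 rows) leaves the
// original state untouched, mirroring a transaction rollback.
function runAtomically(state, steps) {
  const staged = { ...state, created: [...state.created] }; // work on a copy
  for (const step of steps) {
    if (!step(staged)) {
      return { committed: false, state }; // rollback: original unchanged
    }
  }
  return { committed: true, state: staged }; // commit
}

// Hypothetical steps: the second fails, so nothing is committed.
const result = runAtomically({ created: [] }, [
  (s) => { s.created.push("Elections"); return true; },
  () => false, // simulates a MATCH returning 0 rows
  (s) => { s.created.push("Race"); return true; },
]);
console.log(result.committed); // false
console.log(result.state);     // { created: [] }
```

The same structure applied to the real query would mean running it inside one explicit transaction, verifying the returned row counts for each authorizer lookup, and rolling back when any of them is zero.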
-
Selecting nodes based on a keyword or phrase using a full-text search index is not working
My DB has around 2 million nodes (10 different node types) and 3 million relationships in total.
Problem:
I want to run a query on the DB and select a set of nodes (of a similar type) based on the presence of a keyword (or phrase) in one of their fields, called 'Description'.
The 'Description' field is plain text (around 10 lines). So I tried to create a full-text search index on this field using the following command:
CREATE FULLTEXT INDEX Descriptions FOR (n:Nodename) ON EACH [n.Description]
This command completes after only 1 second without any errors, and I can see the newly created index in the list of indexes. But when I run my query, nothing is returned. I suspect something is wrong with the index, because I would expect index creation to take some time on a DB as large as mine.
I used the following command to search and return the nodes:
CALL db.index.fulltext.queryNodes("Descriptions", "Keyword or Phrase") YIELD node, score RETURN node.Description, score
No records were returned by the above command, but I am sure I have hundreds of matches in my DB. Any idea what the problem is? Or is there another solution for fuzzy text search on a field based on a keyword or phrase?
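One thing worth checking (an assumption, not a confirmed diagnosis for the zero-result case): the second argument to db.index.fulltext.queryNodes is parsed as Lucene query syntax, so a multi-word phrase should be quoted to be matched as a phrase, and characters with special Lucene meaning need escaping. A small helper for building the query string, sketched in JavaScript:

```javascript
// Build a Lucene phrase query string for db.index.fulltext.queryNodes.
// Without the surrounding quotes, "Keyword or Phrase" is parsed as separate
// terms rather than as one phrase. Embedded quotes and backslashes are
// escaped so the phrase survives Lucene parsing.
function lucenePhrase(phrase) {
  const escaped = phrase.replace(/(["\\])/g, "\\$1");
  return `"${escaped}"`;
}

console.log(lucenePhrase("Keyword or Phrase")); // "Keyword or Phrase" (with quotes)
```

If quoting does not change the result, the other thing to verify is that the index was created against the same label (Nodename) and property spelling (Description) that the target nodes actually carry, since a label or case mismatch would also produce an index that builds instantly and matches nothing.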