Modules not recognized when running script via Windows Task Scheduler
I have a script named Sku_Matching_Salesforce.py. The location of the file is:

C:\Users\User\Desktop\Personal\DABRA\Sku_Matching_Salesforce.py

This is the venv that is activated:

C:\Users\User\Desktop\Personal\DABRA\venv
If I run the script manually, it works fine, but when I run it via Windows Task Scheduler, the modules are not recognized. I get this error message:
```
Traceback (most recent call last):
  File "C:\Users\User\Desktop\Personal\DABRA\Sku_Matching_Salesforce.py", line 5, in <module>
    import matplotlib.pyplot as plt
ModuleNotFoundError: No module named 'matplotlib'

Traceback (most recent call last):
  File "C:\Users\User\Desktop\Personal\DABRA\Sku_Matching_JFS-Beta1.py", line 5, in <module>
    import sqlalchemy
ModuleNotFoundError: No module named 'sqlalchemy'

Traceback (most recent call last):
  File "C:\Users\User\Desktop\Personal\DABRA\Sku_Matching_Solodeportes_Beta1.py", line 5, in <module>
    import sqlalchemy
ModuleNotFoundError: No module named 'sqlalchemy'

Traceback (most recent call last):
  File "C:\Users\User\Desktop\Personal\DABRA\Sku_Matching_OpenSports_Beta1.py", line 5, in <module>
    import sqlalchemy
ModuleNotFoundError: No module named 'sqlalchemy'

Traceback (most recent call last):
  File "C:\Users\User\Desktop\Personal\DABRA\Unificador_Salida_Final.py", line 6, in <module>
    import sqlalchemy
ModuleNotFoundError: No module named 'sqlalchemy'
```
This is how I configured the task in Windows Task Scheduler on the "Actions" tab:

Program/script: C:\Windows\py.exe
Add arguments (optional): Allscript.py (this is the file that runs the scripts)
Start in: C:\Users\User\Desktop\Personal\DABRA\
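For what it's worth, a scheduled task does not inherit an activated venv: `C:\Windows\py.exe` is the system-wide launcher, so imports resolve against the global interpreter, where matplotlib and sqlalchemy are presumably not installed. One likely fix, sketched under the assumption of a standard `python -m venv` layout, is to point the action directly at the venv's own interpreter:

```
Program/script:  C:\Users\User\Desktop\Personal\DABRA\venv\Scripts\python.exe
Add arguments:   Allscript.py
Start in:        C:\Users\User\Desktop\Personal\DABRA\
```

The venv's python.exe knows its own site-packages directory, so no separate activation step is needed.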
I don't know what the problem is... Could someone help me, please?
Thanks in advance.
See also questions close to this topic
- Python File Tagging System does not retrieve nested dictionaries in dictionary
I am building a file tagging system using Python. The idea is simple. Given a directory of files (and files within subdirectories), I want to filter them using a filter input and tag the matching files with a word or a phrase.
If I have the following contents in my current directory:

```
data/
    budget.xls
    world_building_budget.txt
a.txt
b.exe
hello_world.dat
world_builder.spec
```
and I execute the following command in the shell:
py -3 tag_tool.py -filter=world -tag="World-Building Tool"
My output will be:
These files were tagged with "World-Building Tool":

```
data/world_building_budget.txt
hello_world.dat
world_builder.spec
```
My current output isn't exactly like this but basically, I am converting all files and files within subdirectories into a single dictionary like this:
```python
def fs_tree_to_dict(path_):
    file_token = ''
    for root, dirs, files in os.walk(path_):
        tree = {d: fs_tree_to_dict(os.path.join(root, d)) for d in dirs}
        tree.update({f: file_token for f in files})
        return tree
```
Right now, my dictionary looks like this:

```
key: ''
```

In the following function, I am turning the empty values '' into empty lists (to hold my tags):

```python
def empty_str_to_list(d):
    for k, v in d.items():
        if v == '':
            d[k] = []
        elif isinstance(v, dict):
            empty_str_to_list(v)
```
When I run my entire code, this is my output:
```
hello_world.dat ['World-Building Tool']
world_builder.spec ['World-Building Tool']
```
But it does not see `data/world_building_budget.txt`. This is the full dictionary:

```python
{'data': {'world_building_budget.txt': []}, 'a.txt': [], 'hello_world.dat': [], 'b.exe': [], 'world_builder.spec': []}
```
This is my full code:
```python
import os, argparse

def fs_tree_to_dict(path_):
    file_token = ''
    for root, dirs, files in os.walk(path_):
        tree = {d: fs_tree_to_dict(os.path.join(root, d)) for d in dirs}
        tree.update({f: file_token for f in files})
        return tree

def empty_str_to_list(d):
    for k, v in d.items():
        if v == '':
            d[k] = []
        elif isinstance(v, dict):
            empty_str_to_list(v)

parser = argparse.ArgumentParser(description="Just an example",
                                 formatter_class=argparse.ArgumentDefaultsHelpFormatter)
parser.add_argument("--filter", action="store", help="keyword to filter files")
parser.add_argument("--tag", action="store", help="a tag phrase to attach to a file")
parser.add_argument("--get_tagged", action="store", help="retrieve files matching an existing tag")
args = parser.parse_args()

filter = args.filter
tag = args.tag
get_tagged = args.get_tagged

current_dir = os.getcwd()
files_dict = fs_tree_to_dict(current_dir)
empty_str_to_list(files_dict)

for k, v in files_dict.items():
    if filter in k:
        if v == []:
            v.append(tag)
        print(k, v)
    elif isinstance(v, dict):
        empty_str_to_list(v)
        if get_tagged in v:
            print(k, v)
```
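The nested entries are only reachable if the loop recurses into dict values instead of only calling `empty_str_to_list` on them. A minimal sketch of one way to do that: recurse whenever a value is a dict, carrying the path prefix along. The `tag_files` helper and its return value are illustrative names, not part of the original code:

```python
def tag_files(tree, filter_word, tag, prefix=""):
    """Tag matching files at any depth; return the paths that were tagged."""
    tagged = []
    for name, value in tree.items():
        path = prefix + name
        if isinstance(value, dict):
            # recurse into subdirectories instead of skipping them
            tagged += tag_files(value, filter_word, tag, path + "/")
        elif filter_word in name:
            value.append(tag)
            tagged.append(path)
    return tagged

files = {'data': {'world_building_budget.txt': []},
         'a.txt': [], 'hello_world.dat': [],
         'b.exe': [], 'world_builder.spec': []}
print(tag_files(files, "world", "World-Building Tool"))
# → ['data/world_building_budget.txt', 'hello_world.dat', 'world_builder.spec']
```

Because the recursion mutates the nested lists in place, `files['data']['world_building_budget.txt']` ends up tagged as well.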
- Actually I am working on a project and it is showing "no module named pip_internal". Please help me with this. I am using PyCharm (conda interpreter).
```
File "C:\Users\pjain\AppData\Local\Programs\Python\Python310\lib\runpy.py", line 196, in _run_module_as_main
    return _run_code(code, main_globals, None,
File "C:\Users\pjain\AppData\Local\Programs\Python\Python310\lib\runpy.py", line 86, in _run_code
    exec(code, run_globals)
File "C:\Users\pjain\AppData\Local\Programs\Python\Python310\Scripts\pip.exe\__main__.py", line 4, in <module>
File "C:\Users\pjain\AppData\Local\Programs\Python\Python310\lib\site-packages\pip\_internal\__init__.py", line 4, in <module>
    from pip_internal.utils import _log
```
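The last traceback frame shows pip's own `__init__.py` importing from `pip_internal`, whereas a healthy pip imports from `pip._internal`, which suggests the pip installation itself is corrupted. Assuming the same Python 3.10 interpreter is on PATH, one common repair sketch is to re-bootstrap pip via the standard-library `ensurepip` module:

```
python -m ensurepip --upgrade
python -m pip install --upgrade --force-reinstall pip
```

With a conda interpreter, reinstalling pip through conda (`conda install --force-reinstall pip` in the affected environment) is an alternative route.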
- Looping the function if the input is not a string
I'm new to Python (first of all). I have a homework assignment: write a function that checks whether an item exists in a dictionary.
```python
inventory = {"apple": 50, "orange": 50, "pineapple": 70, "strawberry": 30}

def check_item():
    x = input("Enter the fruit's name: ")
    if not x.isalpha():
        print("Error! You need to type the name of the fruit")
    elif x in inventory:
        print("Fruit found:", x)
        print("Inventory available:", inventory[x], "KG")
    else:
        print("Fruit not found")

check_item()
```
I want the function to loop again only if the input is not a string. I've tried putting return under print("Error! You need to type the name of the fruit"), but that didn't work. Help!
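One way to get that behavior, sketched as a minimal example: wrap the prompt in a `while` loop and repeat only while the input fails `isalpha()`. The `lookup` helper and the injectable `prompt` parameter are additions of mine so the logic can be exercised without interactive input:

```python
inventory = {"apple": 50, "orange": 50, "pineapple": 70, "strawberry": 30}

def lookup(name):
    """Return a message for one input; None signals 'ask again'."""
    if not name.isalpha():
        return None
    if name in inventory:
        return "Fruit found: {} ({} KG available)".format(name, inventory[name])
    return "Fruit not found"

def check_item(prompt=input):
    while True:  # keep asking until the input is alphabetic
        result = lookup(prompt("Enter the fruit's name: "))
        if result is not None:
            return result
        print("Error! You need to type the name of the fruit")

# Demo without a keyboard: the first answer is rejected, the second accepted.
answers = iter(["42!", "apple"])
print(check_item(lambda message: next(answers)))
# last line printed: Fruit found: apple (50 KG available)
```

A bare `return` after the error print exits the function instead of repeating it, which is why that attempt didn't loop.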
- Python3: Import Module over Package
I'm looking for suggestions on importing a module over a package in Python 3. I don't have control over the names of the files, so changing them is not an option. Additionally, the submodules are external GitHub repositories that I constantly pull updates from, so I cannot change the contents of feature.py either.
This is my directory structure:
```
foo/
    __init__.py
    bar_1.py
    bar_2.py
submodules/
    s_module_1/
        foo.py
        feature.py
    s_module_2/
    s_module_3/
main.py
```
Contents of feature.py
```python
...
from foo import foo_methods  # import from foo.py module
...
```
Contents of main.py
```python
from foo import *  # imports from foo package

import sys
sys.path.insert(0, 'submodules/s_module_1')
from feature import *  # results in error
```
When I run main.py, the error I get is
```
ImportError: cannot import name 'foo_methods' from 'foo' (foo/__init__.py)
```
Any suggestions to resolve this would be appreciated!
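One possible workaround (a sketch, not the only option): load the submodule's file explicitly with `importlib` under a unique module name, so it never collides with the already-imported `foo` package. The temp-file setup below merely stands in for `submodules/s_module_1/foo.py`, and the `s_module_1_foo` alias is an invented name:

```python
import importlib.util
import os
import sys
import tempfile

# Stand-in for submodules/s_module_1/foo.py (illustrative only).
tmp = tempfile.mkdtemp()
with open(os.path.join(tmp, "foo.py"), "w") as f:
    f.write("def foo_methods():\n    return 'from module foo'\n")

def load_module(name, path):
    """Load a module from an explicit file path under a chosen name."""
    spec = importlib.util.spec_from_file_location(name, path)
    mod = importlib.util.module_from_spec(spec)
    sys.modules[name] = mod  # register before exec, in case of circular imports
    spec.loader.exec_module(mod)
    return mod

submodule_foo = load_module("s_module_1_foo", os.path.join(tmp, "foo.py"))
print(submodule_foo.foo_methods())  # → from module foo
```

Because the module is registered as `s_module_1_foo`, `feature.py`'s own `from foo import ...` would still need `sys.modules['foo']` pointed at it before importing feature, which is the invasive part of any solution here.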
- How to return a reactive dataframe from within a Shiny module that depends on a button click?
Aim: return a reactive dataframe object from within the module named "modApplyAssumpServer". Problem: I am getting an endless loop, even if I wrap everything within the observeEvent logic in isolate().
I have included another table in the app code below to indicate a simplified version of the logic that works outside of the module framework but that I can't seem to get to work within the module.
```r
library(shiny)
library(dplyr)

df_agg_orig <- data.frame(proj_1 = c(2, 3))

modGrowthInput <- function(id) {
  ns <- NS(id)
  tagList(
    numericInput(ns("first"), label = "Assumption", value = 100),
  )
}

modGrowthServer <- function(id, btnGrowth) {
  moduleServer(id, function(input, output, session) {
    list(
      first = reactive({ input$first })
    )
  })
}

modButtonUI <- function(id, lbl = "Recalculate") {
  ns <- NS(id)
  actionButton(inputId = ns("btn"), label = lbl) #,style = "pill",color = "primary",no_outline = T,size = "xs"
}

modButtonServer <- function(id) {
  moduleServer(id, function(input, output, session) {
    reactive({ input$btn })
  })
}

modApplyAssumpServer <- function(id, btnGrowth, df_agg, case_vals) {
  moduleServer(id, function(input, output, session) {
    stopifnot(is.reactive(btnGrowth))
    stopifnot(is.reactive(df_agg))
    mod_vals <- reactiveVal(df_agg())
    observeEvent(btnGrowth(), {
      isolate({ mod_vals(df_agg() %>% mutate(proj_1 = proj_1 * input$first)) })
      print("Looping problem...")
    })
    mod_vals()
  })
}

#### Test App
GrowthInputApp <- function() {
  ui <- fluidPage(
    sidebarPanel(modGrowthInput("tst"), modButtonUI("tstGrowth")),
    mainPanel(fluidRow(
      splitLayout(
        DT::DTOutput("no_module"), DT::DTOutput("module_tbl")))))

  server <- function(input, output, session) {
    btnGrowth <- modButtonServer("tstGrowth")
    case_vals <- modGrowthServer("tst")
    df_agg <- reactiveValues(df_wide = df_agg_orig)

    # Outside of module test exhibiting expected/desired behavior
    # (at least if the looping issue would let it do so :)
    observeEvent(btnGrowth(), {
      df_agg$df_wide$proj_1 <- round(df_agg$df_wide * case_vals$first(), 2)
    })

    output$no_module <- DT::renderDT({
      DT::datatable(rownames = F, df_agg$df_wide, caption = "Not Updated Within Module")
    })
    output$module_tbl <- DT::renderDT({
      DT::datatable(rownames = F,
                    modApplyAssumpServer("tst", btnGrowth = btnGrowth, df_agg = reactive({ df_agg_orig })),
                    caption = "Table Returned From Module")
    })
  }
  shinyApp(ui, server)
}

runApp(GrowthInputApp())
```
- Module does not provide an export named default - compiled TypeScript module
I'm developing a node npm module in typescript, and after I compile it to commonjs and try to import it, I get the error:
SyntaxError: The requested module 'woo-swell-migrate' does not provide an export named 'default'
But... it does have a default export. Here is the compiled index.js file:

```javascript
"use strict";
var __importDefault = (this && this.__importDefault) || function (mod) {
    return (mod && mod.__esModule) ? mod : { "default": mod };
};
Object.defineProperty(exports, "__esModule", { value: true });
const woocommerce_rest_api_1 = __importDefault(require("@woocommerce/woocommerce-rest-api"));
const swell_node_1 = __importDefault(require("swell-node"));
const path_1 = __importDefault(require("path"));
class WooSwell {
    /**
     * @param config - required params for connecting to woo and swell
     * @param dirPaths - directory paths to store json files and images in
     * @param dirPaths.data - directory to store json files in
     * @param dirPaths.images - directory where wordpress image backup is stored
     */
    constructor(config, dirPaths) {
        this.swell = swell_node_1.default.init(config.swell.store, config.swell.key);
        this.woo = new woocommerce_rest_api_1.default({
            consumerKey: config.woo.consumerKey,
            consumerSecret: config.woo.consumerSecret,
            url: config.woo.url,
            version: config.woo.version
        });
        this.wooImages = {};
        this.paths = {
            wooImageFiles: dirPaths.images,
            wooImageJson: path_1.default.resolve(dirPaths.data, 'woo-images.json'),
            wooProducts: path_1.default.resolve(dirPaths.data, 'woo-products.json'),
            swellCategories: path_1.default.resolve(dirPaths.data, 'swell-categories.json')
        };
    }
    /**
     * gets all records from all pages (or some pages, optionally) of endpoint
     * @param endpoint - example: '/products'
     * @param options - optional. if not provided, will return all records from all pages with no filters
     * @param options.pages - supply a range of pages if not needing all - example: { first: 1, last: 10 }
     * @param options.queryOptions - Swell query options, limit, sort, where, etc. See https://swell.store/docs/api/?javascript#querying
     * @returns - record array
     */
    async getAllPagesSwell(endpoint, options) {
        const res = await this.swell.get(endpoint, options === null || options === void 0 ? void 0 : options.queryOptions);
        let firstPage = (options === null || options === void 0 ? void 0 : options.pages.first) || 1;
        let lastPage = (options === null || options === void 0 ? void 0 : options.pages.last) || Object.keys(res.pages).length;
        let records = [];
        for (let i = firstPage; i <= lastPage; i++) {
            const res = await this.swell.get(endpoint, Object.assign(Object.assign({}, options === null || options === void 0 ? void 0 : options.queryOptions), { page: i }));
            records.push(...res.results);
        }
        return records;
    }
    /**
     * gets all records from all pages of endpoint
     * @param endpoint example: 'products'
     * @param options - optional.
     * @param options.pages - supply a page range if not loading all pages { start: 10, end: 15 }
     * @returns - record array
     */
    async getAllPagesWoo(endpoint, options) {
        var _a, _b;
        const res = await this.woo.get(endpoint);
        const firstPage = ((_a = options === null || options === void 0 ? void 0 : options.pages) === null || _a === void 0 ? void 0 : _a.first) || 1;
        const lastPage = ((_b = options === null || options === void 0 ? void 0 : options.pages) === null || _b === void 0 ? void 0 : _b.last) || parseInt(res.headers['x-wp-totalpages']);
        const records = [];
        for (let i = firstPage; i <= lastPage; i++) {
            records.push(...(await this.woo.get(endpoint, { page: i })).data);
        }
        return records;
    }
}
exports.default = WooSwell;
```
It's there... right at the bottom:

```javascript
exports.default = WooSwell;
```

So why am I getting this error? Here is my package.json:
```json
{
  "name": "woo-swell-migrate",
  "version": "1.0.0",
  "description": "",
  "main": "./dist/index.js",
  "types": "./dist/index.d.ts",
  "type": "module",
  "scripts": {
    "build": "tsc",
    "test": "jest --config jestconfig.json"
  },
  "keywords": [],
  "license": "ISC",
  "dependencies": {
    "@woocommerce/woocommerce-rest-api": "^1.0.1",
    "dotenv": "^16.0.0",
    "es2017": "^0.0.0",
    "image-size": "^1.0.1",
    "mime-types": "^2.1.35",
    "swell-node": "^4.0.9",
    "ts-jest": "^28.0.1"
  },
  "devDependencies": {
    "@types/mime-types": "^2.1.1",
    "@types/jest": "^27.5.0",
    "@types/woocommerce__woocommerce-rest-api": "^1.0.2",
    "@types/node": "^17.0.31",
    "jest": "^28.0.3"
  }
}
```
and my tsconfig.json:

```json
{
  "compilerOptions": {
    "target": "es2017",
    "module": "commonjs",
    "declaration": true,
    "outDir": "./dist",
    "esModuleInterop": true,
    "moduleResolution": "node",
    "strict": true
  },
  "include": ["src"],
  "exclude": ["node_modules", "**/__tests__/*"]
}
```
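A plausible culprit, judging only from the two files shown: `"type": "module"` in package.json tells Node to treat `./dist/index.js` as an ES module, while `"module": "commonjs"` in tsconfig.json makes tsc emit CommonJS, whose `exports.default` assignment is not a real ESM default export. One sketch of a fix is to drop the `"type"` field so the output is loaded as CommonJS:

```json
{
  "main": "./dist/index.js",
  "types": "./dist/index.d.ts"
}
```

The alternative direction is to keep `"type": "module"` and have tsc emit ES modules instead (an ES `module` target in tsconfig), so the compiled file contains a genuine `export default`.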
- Are quotes needed for a parameter in Windows Task Scheduler?
- Hazelcast cluster member crash results in losing all scheduled tasks

We are running 4 instances of our Java application in a Hazelcast cluster. We scheduled around 2000 tasks using the scheduled executor service's schedule method. Hazelcast partitions these 2000 tasks across the 4 instances. When one of the cluster members crashes, every task assigned to a partition owned by the crashed node is lost, while the remaining 3 members complete their assigned tasks.

So how can we overcome this problem and avoid losing tasks?
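Hazelcast's scheduled executor keeps a configurable number of backups of task state, and raising that durability lets a backup member take over tasks owned by a crashed node. A sketch in declarative YAML config; the executor name and the value 2 are illustrative, and the exact keys should be checked against the Hazelcast version in use:

```yaml
hazelcast:
  scheduled-executor-service:
    my-scheduler:
      durability: 2   # backups of task state kept on other members
```

Tasks submitted to an executor with that name would then survive a single-member crash, at the cost of replicating their state.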
- Get previous package versions with conda
Given a specific package name, when I run

conda list my-package-name

I get its current version. What's a simple way / command to instead get the history of the versions of that package installed in the current environment?
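Conda records each install/remove transaction as a numbered revision, so one way to see what was installed when is the revisions listing (standard conda subcommands, though the output format varies by conda version):

```
conda list --revisions
conda install --revision 2   # roll the environment back to revision 2
```

Grepping the revisions output for the package name shows every version it has passed through in that environment.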
- Why does a venv archived with venv-pack during an Azure pipeline have a corrupted Python interpreter?

I want to use the archived environment for spark-submit, but after unpacking it on a k8s cluster, its Python interpreter is corrupted.
- virtualenv not activated on Windows 11

I'm using Python 3.10 on Windows 11. I try to activate the venv with the following command:
.\onlineShop\Scripts\activate.bat
I created the venv using the following command:
python -m venv onlineShop
My pip list:

```
Package      Version
------------ -------
distlib      0.3.4
filelock     3.6.0
pip          22.0.4
platformdirs 2.5.2
pyaes        1.6.1
Pyrogram     2.0.17
PySocks      1.7.1
setuptools   58.1.0
six          1.16.0
virtualenv   20.14.1
```
Also, when I use

.\onlineShop\Scripts\activate

it gives me this error: "cannot be loaded because the execution of scripts is disabled on this system".

Problem: the venv is not activated.
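The "execution of scripts is disabled" message comes from PowerShell's execution policy blocking the `Activate.ps1` script; the `.bat` activator only works in cmd.exe. Assuming the goal is to allow local scripts for the current user only, this is the commonly suggested change, run from PowerShell:

```powershell
Set-ExecutionPolicy -ExecutionPolicy RemoteSigned -Scope CurrentUser
.\onlineShop\Scripts\Activate.ps1
```

Alternatively, activating from a cmd.exe prompt with `activate.bat` avoids the execution policy entirely.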
- Why are these import errors occurring when running Python scripts from cmd or Windows Task Scheduler, but not Anaconda?
I am encountering import errors, but only when running my Python scripts from cmd or Windows Task Scheduler (effectively the same issue, I assume). I have researched existing answers and attempted various solutions (detailed below), but nothing has worked yet. In any case, I need to understand the problem so that I can manage anything like it in the future.
Here is the issue:
Windows 10. Anaconda Python 3.9.7. Virtual environment.
I have a script that works fine if I open an anaconda prompt, activate the virtual environment and run it.
However, this is where the fun starts. If I try to run the script from the non-Anaconda cmd prompt with the command "C:\Users\user\anaconda3\envs\venv\python.exe" "C:\Users\user\scripts\script.py", I get the following error:
```
ImportError: DLL load failed while importing etree: The specified module could not be found.
Traceback includes:
  File "C:\Users\user\anaconda3\envs\venv\lib\site-packages\lxml\html\__init__.py", line 53, in <module>
    from .. import etree
```
This is not as simple as one specific module not being installed, because of course running the script from within the anaconda prompt and the virtual environment works. Similar also happens when I run other scripts. Other errors I have seen include, for example:
```
ImportError: DLL load failed while importing _imaging: The specified module could not be found.
Traceback includes:
  File "C:\Users\user\anaconda3\envs\venv\lib\site-packages\PIL\Image.py", line 114, in <module>
    from . import _imaging as core
```
Also, I think this may be somehow related. Importing numpy (1.22.3) from within the python interpreter in the virtual environment works fine, but when I try to run a test script that imports numpy it fails both from anaconda and the cmd with the following error:
```
ImportError: cannot import name SystemRandom
```
The overall issue was noted originally when trying to run various scripts from Windows Task Scheduler, with the path to python "C:\Users\user\anaconda3\envs\venv\python.exe" entered as the Program/script and the script "script.py" entered as an argument. The above errors were produced, then reproduced by running the scripts from a non-Anaconda cmd.

I am looking to understand what is happening here, and for a solution that gets the scripts running from the virtual environment via Windows Task Scheduler.
Update:
I have uninstalled and reinstalled numpy (and pandas) using conda. This has left the venv with numpy==1.20.3 (and pandas==1.4.2). On attempting to re-run one of the scripts, it runs fine from within the venv in Anaconda, but produces the following error when run from cmd or from Windows Task Scheduler as above:
```
ImportError: Unable to import required dependencies:
numpy:

IMPORTANT: PLEASE READ THIS FOR ADVICE ON HOW TO SOLVE THIS ISSUE!

Importing the numpy C-extensions failed. This error can happen for
many reasons, often due to issues with your setup or how NumPy was
installed.

We have compiled some common reasons and troubleshooting tips at:

    https://numpy.org/devdocs/user/troubleshooting-importerror.html

Please note and check the following:

  * The Python version is: Python3.9 from "C:\Users\user\anaconda3\envs\venv\python.exe"
  * The NumPy version is: "1.20.3"

and make sure that they are the versions you expect.
Please carefully study the documentation linked above for further help.

Original error was: DLL load failed while importing _multiarray_umath: The specified module could not be found.
```
I have looked into the solutions suggested, but am still completely at a loss, especially as to why the script runs from the venv in one place but not the other.
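A pattern that often explains conda DLL errors outside an Anaconda prompt: conda-built packages locate their DLLs via directories (such as the environment's `Library\bin`) that `activate` prepends to PATH, and launching the env's `python.exe` directly skips that step. A wrapper batch file for Task Scheduler, sketched with the paths from the question (otherwise assumptions), would be:

```bat
@echo off
rem Activate the conda env so its DLL directories land on PATH
call C:\Users\user\anaconda3\Scripts\activate.bat C:\Users\user\anaconda3\envs\venv
python C:\Users\user\scripts\script.py
```

Scheduling this `.bat` instead of `python.exe` reproduces the environment the Anaconda prompt provides.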
- bat file in Windows scheduler not running python script

I am trying to run a python script to update a ppt presentation. I also tried this a year ago, running a regression and updating a table in SQL, and it didn't run either. I gave up then as I couldn't resolve it.
I have managed to create a bat file to run R code in windows scheduler and that works.
I have created the bat file and tested it in the command prompt: the py file runs and updates the ppt presentation.
When I run this bat file from Windows Scheduler, it doesn't update the ppt.
Currently the bat file is as follows:
```bat
@echo off
SET log_file=C:\python\logfile.txt
echo on
call :logit >> %log_file%
exit /b 0

:logit
call C:\ProgramData\Anaconda3\Scripts\activate.bat
cd C:\python\
python Updateppt.py
```
These are the things I have tried so far:
- Added a log file to the bat file. The log file is created and adds the three steps so I know the bat file is run. The log file returns this:
```
C:\python>call C:\ProgramData\Anaconda3\Scripts\activate.bat
(base) C:\python>cd C:\python\
(base) C:\python>python Updateppt.py
```
- Edited the bat file into various combinations based on recommendations from Stack Overflow. Most of them worked in the command prompt, but none work in Windows Scheduler
- Check the security settings on the folder where I am saving the information and I have full access
- Made sure the folder is added to the PYTHONPATH in both system and user sections for environment variables
- Have an R file that currently runs via a bat file through windows scheduler so have made sure all the general, conditions and settings sections in the properties match that one
- re-run pip install on all packages to make sure they are accessible and in the right location when the py file runs. This was based on this advice: Cannot schedule a python script to run through Windows Task Scheduler
- Timed the command prompt and windows scheduler tasks and the command prompts takes 30 seconds whereas the windows scheduler takes 20 seconds
- Added logging to the python file: it logs when the script starts, and a start time is logged when running from Windows Scheduler, so the python script is definitely being run
Is there anything I can do to get this working? I am really at a loss, and I can't seem to find a Stack Overflow response that actually solves the issue I am having.
UPDATE
I have added times after each function is run. Right before the last function, the log file shows that when run in Windows Scheduler, it doesn't run the last function but instead loops back to the first one. It doesn't do this in the command prompt.
windows scheduler run log of python
```
INFO:root:run script started at 2022-04-29 13:18:31.318567
INFO:root:loaded enc data at 2022-04-29 13:18:32.072627
INFO:root:create enc_id at 2022-04-29 13:18:32.075627
INFO:root:agg data at 2022-04-29 13:18:59.782707
INFO:root:run script started at 2022-04-29 13:19:22.904437
INFO:root:loaded enc data at 2022-04-29 13:19:23.225462
INFO:root:create enc_id at 2022-04-29 13:19:23.228464
```
command prompt log of python
```
INFO:root:run script started at 2022-04-29 13:20:48.871881
INFO:root:loaded enc data at 2022-04-29 13:20:49.051893
INFO:root:create enc_id at 2022-04-29 13:20:49.054894
INFO:root:agg data at 2022-04-29 13:21:05.040096
INFO:root:run script stopped at 2022-04-29 13:21:05.436125
```
It should aggregate the data, export to ppt, and then stop and log the 'run script stopped' line. Why would it run correctly in the command prompt but not Windows Scheduler?
This is the code it's not running
```python
def update_ppt(CHW_daily):
    daily_figures = Presentation(ResultPath + 'Template/daily_figures_template.pptx')
    # CHW table
    slide_CHW = daily_figures.slides[0]
    table_CHW = [shape for shape in slide_CHW.shapes if shape.has_table]
    # Then we can update the values in each cell directly from the dataframe:
    for i in range(1, 8):
        for j in range(0, 6):
            table_CHW[0].table.cell(i, j).text = str(CHW_daily.iloc[i-1, j])
            table_CHW[0].table.cell(i, j).text_frame.paragraphs[0].font.size = Pt(14)
    daily_figures.save(ResultPath + 'daily_figures.pptx')
    return ()
```
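A diagnostic sketch rather than a fix: the second "run script started" entry suggests the first run died (quite possibly inside the pptx step) and was re-launched, and the current redirection only captures stdout. Logging stderr and the exit code in the batch file should surface the missing traceback; the paths reuse the ones in the question:

```bat
@echo off
call C:\ProgramData\Anaconda3\Scripts\activate.bat
cd /d C:\python\
rem -u disables buffering; 2>&1 captures the traceback on stderr
python -u Updateppt.py >> C:\python\logfile.txt 2>&1
echo exit code was %errorlevel% >> C:\python\logfile.txt
```

If the log then shows a crash, the task's "If the task fails, restart every..." setting in Task Scheduler would explain the immediate re-run.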
- Problems with Rmarkdown Word document formatting when rendered from taskscheduleR in RStudio
I have problems with the formatting of my Rmarkdown Word documents when I render them as part of an automatic task configured in RStudio using taskscheduleR. More precisely, page breaks disappear. Does anyone know how I can solve this problem?