Iterate over JSON with InSpec and only target specific entries for tests
To verify that the correct Consul version is installed on my servers, I am trying to write an InSpec test. The idea is to call this test with multiple IP addresses, parse the JSON I get from the Consul API, and then check the Consul version only for the servers whose IPs I have listed.
Now I am struggling with iterating over the JSON and checking only the matching servers.
It should look something like this:
describe json(content: http('http://172.17.0.1:8500/v1/agent/members').body) do
  for i in 0..2 do
    ip_addresses.each { |node| its([i, 'Tags', 'build']) { should match consul_version }.where(['Addr'] == node) }
  end
end
It doesn't work of course, but I think it explains what I am trying to achieve.
Is there an approach that would allow me to do this?
Edit: the JSON boils down to this:
[
  {
    "Addr": "192.100.100.100",
    "Name": "machine0",
    "Tags": {
      "build": "1.11.3"
    }
  },
  {
    "Addr": "192.100.100.101",
    "Name": "machine1",
    "Tags": {
      "build": "1.11.3"
    }
  },
  {
    "Addr": "192.100.100.102",
    "Name": "machine2",
    "Tags": {
      "build": "1.11.3"
    }
  }
]
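For illustration, a minimal sketch of one way this is often approached in InSpec: parse the response once in plain Ruby, filter the members by the IPs of interest, and generate one describe block per match. This assumes ip_addresses and consul_version are defined elsewhere in the profile; it is a sketch, not a verified solution.

require 'json'

# Parse the member list once; the http resource is available in control files.
members = JSON.parse(http('http://172.17.0.1:8500/v1/agent/members').body)

# Generate a describe block only for members whose address is in our list.
members.select { |m| ip_addresses.include?(m['Addr']) }.each do |member|
  describe "consul build on #{member['Name']} (#{member['Addr']})" do
    subject { member['Tags']['build'] }
    it { should match consul_version }
  end
end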
See also questions close to this topic
- What's missing in my Ruby 'Inverse Of' relationship
I know this topic has been addressed, but I have been at this for 2 days and I'm just stuck. I know inverse_of does not create a new query, so should I use another method?
Question: how do I set up an inverse_of with a has_one/belongs_to situation on the same class?
Explanation: a User has_one :spouse and belongs_to :spouse_from. They are inverses of each other. When a User signs up, they can invite their significant other. For example:
- user_a invites & creates user_b
- user_b.spouse_id is set to user_a.id
- In a separate method I want to be able to update something like user_a.spouse_id = user_a.spouse.id
The only association that works at this point is user_b.spouse.
class User
  has_one :spouse, class_name: 'User', foreign_key: :spouse_id, dependent: :nullify, inverse_of: :spouse_from
  belongs_to :spouse_from, class_name: 'User', foreign_key: :spouse_id, inverse_of: :spouse, optional: true
end
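For illustration, a minimal sketch of how these two associations resolve given the shared spouse_id foreign key (a hypothetical console session, not a verified fix for the issue above):

user_a = User.create!
user_b = User.create!(spouse_id: user_a.id)

# belongs_to follows user_b's own spouse_id column:
user_b.spouse_from  # => user_a

# has_one searches for the User whose spouse_id equals user_a.id:
user_a.spouse       # => user_b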
- Library functions in InSpec tests
Team --
I have a Ruby library helper in which I have defined multiple functions/methods. How do I reference those in Chef InSpec tests?

def interface_name
  # some code with logic
end
In my specific use case, I am checking to see if a custom networking device was specified as a JSON parameter, and if it was, I am validating it to make sure it's real (in case they misspelled it) and also gathering its IP, state, and other data as reported by Ohai.
This is what I have so far, but I'm not sure if this is correct:
describe file('/path/to/custom_attributes.json') do
  it { should exist }
  unless json('/path/to/custom_attributes.json').empty? do
    its(['networking']['interface_name']) { should_not be_empty }
    interface_name = file(json('/path/to/custom_attributes.json').params['networking']['interface_name'])
  end
end

describe file('/etc/NetworkManager/system-connections/wired_connection') do
  unless interface_name.empty?
    its('content') { should_not match(/^interface-name=wl.*/mx) }
  end
  its('content') { should match(%r{ca-cert=/etc/ssl/certs/ca-certificates\.crt}mx) }
  its('content') { should match(/id=\.corp.*type=ethernet.*autoconnect-priority=100.*dns-search=corp\.domain.com.*/mx) }
end
The problem/question is that if I gather the parameter directly from the JSON file, then I am bypassing all the validation logic that I'm doing in the library, which defeats the purpose. So, how do I get access to that library function/method in the InSpec test?
For reference, here is the function:
def interface_name
  file = '/path/to/custom_attributes.json'
  if File.exist?(file) && !File.stat(file).zero?
    attributes = JSON.parse(File.read(file))
    device_name = attributes['networking']['interface_name']
    if device_name && !device_name.empty? && networking_devices.include?(device_name)
      interface = device_name
      Chef::Log.info("Valid custom interface provided, using \"#{device_name}\".")
    else
      Chef::Log.debug("Invalid interface (\"#{device_name}\") provided. Valid options are: \"#{networking_devices.keys}\"")
      interface = nil
    end
  else
    Chef::Log.debug('No custom interface provided.')
  end
  interface
rescue JSON::ParserError
  nil
end
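For illustration, a commonly used pattern is to place helpers in the InSpec profile's libraries directory, where they are loaded and made available to control files. A minimal sketch, with the module name MyHelpers being hypothetical; the Chef-specific parts of the function above (Chef::Log, the Ohai-backed networking_devices) would need InSpec-side equivalents:

# libraries/my_helpers.rb (module name is hypothetical)
module MyHelpers
  module_function

  def interface_name
    # the validation logic from the cookbook library would live here
    'eth0'
  end
end

# controls/network.rb
interface = MyHelpers.interface_name
describe file('/etc/NetworkManager/system-connections/wired_connection') do
  its('content') { should match(/^interface-name=#{interface}$/) }
end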
- Writing structured facts
I've written some Ruby code which runs on a Linux server and returns details about the server as a fact. It does this by connecting to Amazon and retrieving some JSON (it runs two separate commands: one to retrieve a list of disks, e.g. /dev/sda1, /dev/xvdb, and another that maps these to a volume ID via a second query).
I've made some small amendments to the output and added some values I'm interested in. The code runs multiple times and returns multiple hashes (one for each disk; maybe I should merge them?). Anyway, here's an example from a server with two disks (this is just some debug output):
Hash is {"/dev/sda1"=>{"disk_mapping"=>"", "is_lvm_partitioned"=>"false", "volumeid"=>"vol1234"}}. Hash is {"/dev/xvdb1"=>{"disk_mapping"=>"xvdb1", "is_lvm_partitioned"=>"true", "volumeid"=>"vol5678"}}.
The next thing I want to do is turn this into a structured fact (with the devices /dev/sda1 and /dev/xvdb1 as the "keys"). Here's a rough idea of how I've done it (I've skipped a lot of the irrelevant code):
json_string = {
  "#{path}" => {
    "disk_mapping" => "#{disk_mapping}",
    "is_lvm_partitioned" => "#{is_lvm_partitioned}",
    "volumeid" => "#{getVolumes}"
  }
}.to_json

hash = JSON.parse(json_string)
if hash.is_a? Hash then
  debug_msg("Hash is #{hash}.")
  hash["#{path}"].each do |key, child|
    debug_msg("Setting key: #{key} child: #{child}.")
  end
end
I've never really written any Ruby before, so this is copied from multiple places, but besides aggregated facts I can't find a way to do this. I've tried something like this:
Facter.add(:test["#{path}"]["#{key}"]) do
  setcode do
    #{child}
  end
end
So I guess in order I want to know:
- Should I merge the hashes somehow? I originally assumed I should, but found this incredibly hard due to not knowing how many hashes I'd have.
- Should I be using an aggregated fact or a "standard" one?
- How do I retain the structure of the hash and then call it with a single query (e.g. facter test)?
- Any examples which are similar to my code (the Puppet ones I've found quite hard to follow)?
Here's what I'm looking for at the end:
[root@blah ~]# facter test
{
  /dev/sda1 => {
    disk_mapping => "",
    is_lvm_partitioned => "false",
    volumeid => "vol1234"
  },
  /dev/xvdb => {
    disk_mapping => "xvdb1",
    is_lvm_partitioned => "true",
    volumeid => "vol5678"
  }
}
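For illustration, a minimal sketch of the usual pattern: build one merged hash and expose it through a single Facter.add call, since a custom fact's value can itself be a hash (a structured fact). The disk_hashes helper below is hypothetical, standing in for the per-disk hashes the existing code already produces:

# Hypothetical custom fact: one structured fact named "test".
Facter.add(:test) do
  setcode do
    merged = {}
    # disk_hashes stands in for the per-disk hashes, e.g. {"/dev/sda1" => {...}}
    disk_hashes.each { |h| merged.merge!(h) }
    merged
  end
end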
Thanks.
- (Terraform) Error 400: Invalid request: instance name (pg_instance)., invalid
On GCP, I'm trying to create a Cloud SQL instance with the Terraform code below:
resource "google_sql_database_instance" "postgres" { name = "pg_instance" database_version = "POSTGRES_13" region = "asia-northeast1" deletion_protection = false settings { tier = "db-f1-micro" disk_size = 10 } } resource "google_sql_user" "users" { name = "postgres" instance = google_sql_database_instance.postgres.name password = "admin" }
But I got this error:
Error: Error, failed to create instance pg_instance: googleapi: Error 400: Invalid request: instance name (pg_instance)., invalid
Are there any mistakes in my Terraform code?
- How do Kubernetes and Terraform work seamlessly together, and what role does each undertake?
I am a bit confused about the individual roles of Kubernetes and Terraform when using them both on a project.
Until very recently, I had a clear understanding of both their purposes and everything made sense to me. But then I heard in one of Nana's videos on Terraform that Terraform is also very advanced at orchestration, and I got confused.
Here's my current understanding of both these tools:
Kubernetes: orchestration software that controls many Docker containers working together seamlessly. Kubernetes makes sure that new containers are deployed based on the desired infrastructure defined in configuration files (written with the help of a tool like Terraform, as IaC).
Terraform: Tool for provisioning, configuring, and managing infrastructure as IaC.
So, when we say that Terraform is a good tool for orchestration, do we mean that it's a good tool for orchestrating infrastructure states, or Docker containers as well?
I hope someone can clear that out for me!
- Automate Azure DevOps (FTP upload) and Git to upload to a remote server
The current setup is as below:
- Version Control - Git
- Repos and Branch hosted on - Azure DevOps
- Codebase - External server
The dev team clones the Azure repo into a local Git project, and any staged changes are committed via Git and pushed to a specific branch on Azure DevOps. In this setup we want to upload the changes to external FTP servers and avoid manual uploads. I am currently trying to use the Azure DevOps FTP Upload task (https://docs.microsoft.com/en-us/azure/devops/pipelines/tasks/utility/ftp-upload?view=azure-devops), but I am facing issues; the YAML script is below:
trigger:
- main

pool:
  vmImage: 'ubuntu-latest'

variables:
  phpVersion: 7.4
  webAppName: 'Test Project'
  buildConfiguration: 'Release'
  vmImageName: 'ubuntu-latest'

steps:
- publish: $(System.DefaultWorkingDirectory)/AzureRepoName
  artifact: Test Project Deploy
- task: FtpUpload@2
  displayName: 'FTP Upload'
  inputs:
    credentialsOption: inputs
    serverUrl: 'ftps://00.00.00.00:22'
    username: ftp-username
    password: ftp-password
    rootDirectory: '$(System.DefaultWorkingDirectory)/AzureRepoName'
    remoteDirectory: '/home/public_html'
    clean: false
    cleanContents: false
    preservePaths: true
    trustSSL: true
PROBLEM
The following errors occur when I commit something (for test purposes):
Starting: PublishPipelineArtifact
==============================================================================
Task         : Publish Pipeline Artifacts
Description  : Publish (upload) a file or directory as a named artifact for the current run
Version      : 1.199.0
Author       : Microsoft Corporation
Help         : https://docs.microsoft.com/azure/devops/pipelines/tasks/utility/publish-pipeline-artifact
==============================================================================
Artifact name input: Test Project Deploy
##[error]Path does not exist: /home/vsts/work/1/s/AzureRepoName
Finishing: PublishPipelineArtifact
I want any staged change that is committed to the main branch on Azure DevOps to be automatically deployed to the remote FTP server.
Thanks
- How to pass parameters to a template in Chef?
I just started to study Chef. These days, I'm testing the template sample from https://docs.chef.io/resources/template/
But I failed every time. Here's my code:
- I created a cookbook named sample, and created a recipe named default.rb:

file '/srv/www/htdocs/index.html' do
  content 'Hello World!'
end

include_recipe '::e'
- Then I created another recipe, e.rb:

default['authorization']['sudo']['groups'] = %w(sysadmin wheel admin)
default['authorization']['sudo']['users'] = %w(jerry greg)

template '/tmp/test.txt' do
  source 'test.txt.erb'
  mode '0440'
  owner 'root'
  group 'root'
  variables(
    sudoers_groups: node['authorization']['sudo']['groups'],
    sudoers_users: node['authorization']['sudo']['users']
  )
end
- In this cookbook's templates folder, I created an ERB file, test.txt.erb:

Defaults !lecture,tty_tickets,!fqdn
root ALL=(ALL) ALL
<% @sudoers_users.each do |user| -%>
<%= user %> ALL=(ALL) <%= "NOPASSWD:" if @passwordless %>ALL
<% end -%>
%sysadmin ALL=(ALL) <%= "NOPASSWD:" if @passwordless %>ALL
<% @sudoers_groups.each do |group| -%>
<%= group %> ALL=(ALL) <%= "NOPASSWD:" if @passwordless %>ALL
<% end -%>
- Then after kicking off chef-client, the error message shows:
[2022-05-06T18:01:07+08:00] FATAL: NameError: undefined local variable or method `default' for cookbook: sample, recipe: e :Chef::Recipe
- Since the error says it can't find a variable named 'default', and the sample uses node['authorization']['sudo']['groups'] to pass parameters to sudoers_groups, I think e.rb should maybe be this:
node['authorization']['sudo']['groups'] = %w(sysadmin wheel admin)
node['authorization']['sudo']['users'] = %w(jerry greg)

template '/tmp/test.txt' do
  source 'test.txt.erb'
  mode '0440'
  owner 'root'
  group 'root'
  variables(
    sudoers_groups: node['authorization']['sudo']['groups'],
    sudoers_users: node['authorization']['sudo']['users']
  )
end
- But it still fails --
[2022-05-06T17:39:38+08:00] FATAL: NoMethodError: undefined method `[]' for nil:NilClass
I'm really confused by this official sample. Please kindly help me; thanks in advance for any ideas.
Regards, Eisen
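For illustration, a minimal sketch of the split the Chef docs assume: `default[...]` is only defined inside attribute files, while recipes read attributes via node[...]. This is a sketch of the usual layout, not a verified fix for the errors above:

# attributes/default.rb -- `default` works here
default['authorization']['sudo']['groups'] = %w(sysadmin wheel admin)
default['authorization']['sudo']['users'] = %w(jerry greg)

# recipes/e.rb -- read the attributes via node[...]
template '/tmp/test.txt' do
  source 'test.txt.erb'
  mode '0440'
  owner 'root'
  group 'root'
  variables(
    sudoers_groups: node['authorization']['sudo']['groups'],
    sudoers_users: node['authorization']['sudo']['users']
  )
end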
- Nginx, Chef, and Docker
I am new to the Docker concept. For an application I need to create an nginx container. The nginx configuration is in a Chef cookbook, hence I have an example.conf.erb file and a default.rb (containing the nginx setup) in my chef/cookbook/ directory. I am not sure how to containerise it. I copied the .conf.erb to /etc/nginx/conf.d/example.conf.erb, but I am not sure what else to do. I am confused and haven't found a resource online; I need help urgently. Here is default.rb:
include_recipe 'nginx_ldap_auth'
include_recipe 'nginx'

template 'nginx config' do
  source 'example.conf.erb'
  path '/etc/nginx/conf.d/example.conf.erb'
  owner 'root'
  group 'root'
  mode ''
  variables({'environment variables'})
  notifies :restart, 'service[nginx]'
end
My Dockerfile currently look like this:
FROM nginx:alpine
COPY default.conf /etc/nginx/example.conf.erb/
I am not sure if I need docker-compose. Apart from the Dockerfile there is nothing else I have created. Please guide me.
- Can InSpec use the output of a command to trigger an only_if skip of a control?
I am trying to set something up like this:
only_if('physical device') do
  command('hostnamectl') do
    its('stdout') { should match /Chassis: desktop/ }
  end
end

describe command('ifquery --list') do
  its('stdout') { should eq "lo\nbond0\neth0\neth1\neth2\n" }
end
because I'm hitting a group of physical and virtual machines with the same control, and I want to check that all the physical devices have 3 Ethernet interfaces and all the virtual devices have 1.
There are only a handful of places you can go in Linux to tell whether you're running a VM or bare metal, but they almost all require grepping something out of a file, so I need to check stdout somehow. So, what I'm wondering is: how do I use the stdout of a command to skip something?
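For illustration, a minimal sketch of the pattern InSpec supports: only_if takes a block whose truthiness decides whether the control runs, and resources such as command can be called inside that block (a sketch, untested against this setup):

control 'physical-interfaces' do
  # Skip the whole control unless hostnamectl reports a desktop chassis.
  only_if('physical device') do
    command('hostnamectl').stdout.match?(/Chassis: desktop/)
  end

  describe command('ifquery --list') do
    its('stdout') { should eq "lo\nbond0\neth0\neth1\neth2\n" }
  end
end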