W3 Total Cache: minified CSS and JS files not found in AWS S3-based CDN

My configuration consists of a WordPress installation with the W3 Total Cache plugin, set up to cache resources on a CDN (Amazon S3 + CloudFront).

The HTML resources and images load correctly, but the problem is with the minified CSS and JS files, which the plugin renames (e.g. style.css becomes 18704.css).

The problem might be in how the URI is written in the site's HTML, which still refers to the original style.css instead of the generated filename:

<link rel='stylesheet' id='wptheme-style-css'  href='https://<distribution_id>.cloudfront.net/wp-content/themes/wptheme/style.css.gzip?ver=1.0.1' type='text/css' media='all' />

Cloudfront configuration:

  • Two origins added: example-com.s3.amazonaws.com and example.com

S3 configuration:

  • Bucket policy:


{
 "Version": "2012-10-17",
 "Statement": [
    {
        "Sid": "PublicReadGetObject",
        "Effect": "Allow",
        "Principal": "*",
        "Action": "s3:GetObject",
        "Resource": "arn:aws:s3:::bucketname/*"
    }
 ]
}
  • The files stored in the S3 bucket have the correct MIME type, although some may be missing the CORS policy:


<?xml version="1.0" encoding="UTF-8"?>
<CORSConfiguration xmlns="http://s3.amazonaws.com/doc/2006-03-01/">
<CORSRule>
    <AllowedOrigin>*</AllowedOrigin>
    <AllowedMethod>GET</AllowedMethod>
    <AllowedHeader>*</AllowedHeader>
</CORSRule>
</CORSConfiguration>
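One thing worth checking alongside the URLs: when pre-compressed .gzip copies are uploaded to S3, each object needs the original Content-Type plus Content-Encoding: gzip, or browsers won't parse them even when they download successfully. A minimal sketch of the metadata such a key would need (the helper name is illustrative, not part of W3 Total Cache):

```python
# Map a W3 Total Cache CDN key like "18704.css.gzip" to the S3 object
# metadata it needs so CloudFront serves it with usable headers.
CONTENT_TYPES = {
    ".css": "text/css",
    ".js": "application/javascript",
}

def s3_metadata_for(key: str) -> dict:
    """Return the ContentType/ContentEncoding metadata for a CDN key."""
    metadata = {}
    if key.endswith(".gzip"):
        metadata["ContentEncoding"] = "gzip"
        key = key[: -len(".gzip")]  # strip the compression suffix
    for ext, ctype in CONTENT_TYPES.items():
        if key.endswith(ext):
            metadata["ContentType"] = ctype
            break
    return metadata
```

These are the values one would pass as upload metadata; inspecting a few of the failing objects' metadata in the S3 console is a quick way to rule this out.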

Browser console:

wp-signup.php:1 Uncaught SyntaxError: Unexpected token <
wp-signup.php:1 Uncaught SyntaxError: Unexpected token <
wp-signup.php:1 Uncaught SyntaxError: Unexpected token <
(index):96 Uncaught ReferenceError: jQuery is not defined
    at (index):96
(anonymous) @ (index):96
www.example.com/:199 Uncaught ReferenceError: jQuery is not defined
    at (index):199
(anonymous) @ (index):199
www.example.com/:1353 Uncaught ReferenceError: jQuery is not defined
    at (index):1353
(anonymous) @ (index):1353
wp-signup.php:1 Uncaught SyntaxError: Unexpected token <
wp-signup.php:1 Uncaught SyntaxError: Unexpected token <
wp-signup.php:1 Uncaught SyntaxError: Unexpected token <
wp-signup.php:1 Uncaught SyntaxError: Unexpected token <
wp-signup.php:1 Uncaught SyntaxError: Unexpected token <
wp-signup.php:1 Uncaught SyntaxError: Unexpected token <
wp-signup.php:1 Uncaught SyntaxError: Unexpected token <
wp-signup.php:1 Uncaught SyntaxError: Unexpected token <
wp-signup.php:1 Uncaught SyntaxError: Unexpected token <
wp-signup.php:1 Uncaught SyntaxError: Unexpected token <
wp-signup.php:1 Uncaught SyntaxError: Unexpected token <
wp-signup.php:1 Uncaught SyntaxError: Unexpected token <
wp-signup.php:1 Uncaught SyntaxError: Unexpected token <
twitter-widgets.js.gzip Failed to load resource: the server responded with a status of 403 ()
wp-signup.php:1 Uncaught SyntaxError: Unexpected token <
wp-signup.php:1 Uncaught SyntaxError: Unexpected token <
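The repeated Uncaught SyntaxError: Unexpected token < entries mean the browser received an HTML or XML document (for example an error page) where it expected JavaScript, which fits the 403 above: S3 returns AccessDenied as an XML body. A small offline sketch of that check, which could be applied to the body of any failing request in the network tab:

```python
def looks_like_html(body: str) -> bool:
    """True if a response body starts like an HTML/XML document
    rather than JavaScript or CSS."""
    head = body.lstrip()[:15].lower()
    return head.startswith(("<!doctype", "<html", "<?xml"))
```

If this is true for a script URL, the fix is on the CDN side (missing object or permissions), not in the page markup.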

Chrome's network console:

[screenshot: network console]

The actual minified and compressed CSS and JS files in the S3 bucket:

[screenshot: S3 bucket]

  • Setting up SSL termination on elastic beanstalk instance

    Setup

    I have a web application set up and running with no problems on AWS Elastic Beanstalk. The instance uses Tomcat and an Apache proxy. I currently have an Elastic Load Balancer sitting in front of the Elastic Beanstalk instance. The load balancer performs SSL termination, which was really easy to set up.

    My web application has little traffic, so it runs on a small server. My issue is that the load balancer costs more per month than running my web application server! For this reason I want to set up SSL termination on the Elastic Beanstalk instance, remove the load balancer, and save some money.

    I followed these instructions for Terminating HTTPS on EC2 Instances Running Tomcat. Basically it involves opening port 443 on the instance and writing the server certificate and key into files on the server at startup. I also changed Elastic Beanstalk to a single-instance environment (no load balancer or Auto Scaling group).

    I would then have followed the instructions for Storing Private Keys Securely in Amazon S3, although I didn't get past the first part, which 'hardcodes' the key directly into a server file.

    The Problem

    The instructions for Terminating HTTPS on EC2 Instances Running Tomcat suggest the following container_commands are required:

    container_commands:
      killhttpd:
        command: "killall httpd"
      waitforhttpddeath:
        command: "sleep 3"
    

    However, this command failed on my instance (there were no processes with that name to kill). I tried replacing it with:

    container_commands:
      killhttpd:
        command: "service httpd stop"
      waitforhttpddeath:
        command: "sleep 3"
    

    But this also failed. My instance still deploys OK and listens on port 80, but HTTPS on port 443 is not available.
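    One option, if the guide's killall step is only cleanup, is to let the command fail without aborting the deployment; container_commands supports an ignoreErrors flag for exactly this (a sketch based on the snippet above):

```yaml
container_commands:
  killhttpd:
    command: "killall httpd"
    ignoreErrors: true     # don't fail the deploy if httpd isn't running
  waitforhttpddeath:
    command: "sleep 3"
```

    That only addresses the deployment failure, not whether 443 ends up listening, so it's worth checking the eb-activity and Tomcat logs after a deploy too.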

    Has anyone seen this before? Has anyone successfully set up SSL termination on an Elastic Beanstalk instance? Not many people seem to be trying this.

    Thanks.

  • "ImportError: No module named pyRserve" when running AWS Glue job

    When I run my Glue job, in which I try to import the pyRserve Python module (pure Python), I get this error:

    LogType:stdout
    Log Upload Time:Sun Jan 21 12:27:32 +0000 2018
    LogLength:206
    Log Contents:
    Traceback (most recent call last):
    File "script_2018-01-21-12-27-05.py", line 8, in <module>
    import pyRserve
    ImportError: No module named pyRserve
    End of LogType:stdout
    

    Here are details about my job:

    $ aws glue get-job --job-name test_trunc
    {
        "Job": {
            "Name": "test_trunc",
            "Role": "arn:aws:iam::#CLIPPED#:role/AWSGlueServiceRoleDefault",
            "CreatedOn": 1516192543.117,
            "LastModifiedOn": 1516537317.889,
            "ExecutionProperty": {
                "MaxConcurrentRuns": 1
            },
            "Command": {
                "Name": "glueetl",
                "ScriptLocation": "s3://#CLIPPED#/gluescripts/test_trunc"
            },
            "DefaultArguments": {
                "--TempDir": "s3://#CLIPPED#/jobs/test_trunc/scripts",
                "--extra-py-files": "s3://#CLIPPED#/jobs/test_trunc/python-libs/pyRserve.zip",
                "--job-bookmark-option": "job-bookmark-disable",
                "--job-language": "python"
            },
            "Connections": {
                "Connections": [
                    "redshift"
                ]
            },
            "MaxRetries": 0,
            "AllocatedCapacity": 10
        }
    }
    

    Here is the script I'm running:

    import sys
    from awsglue.transforms import *
    from awsglue.utils import getResolvedOptions
    from pyspark.context import SparkContext
    from awsglue.context import GlueContext
    from awsglue.job import Job
    import pprint
    import pyRserve
    

    Here is the complete log:

    https://gist.github.com/mattazend/b611d0232d94ade4bc4c16bcb79f73a8
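    Glue's --extra-py-files expects a .zip whose importable modules or packages sit at the top level of the archive (e.g. pyRserve/__init__.py, not pyRserve-x.y/pyRserve/__init__.py). A small local sketch of building and verifying such a layout with only the standard library; the module name here is a stand-in:

```python
import sys
import zipfile

def build_libs_zip(zip_path: str, modules: dict) -> None:
    """Write {archive_name: source_code} entries into a zip whose
    members sit at the archive root -- the layout Python (and hence
    Glue's --extra-py-files) can import from directly."""
    with zipfile.ZipFile(zip_path, "w") as zf:
        for name, source in modules.items():
            zf.writestr(name, source)

# Build a zip containing a top-level single-file module...
build_libs_zip("libs.zip", {"mymodule.py": "VALUE = 42\n"})

# ...and prove Python can import from it, the way Glue would.
sys.path.insert(0, "libs.zip")
import mymodule
print(mymodule.VALUE)  # 42
```

    If pyRserve.zip unzips to a versioned folder rather than the package itself, repacking it with the package at the root may be all that's needed.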

  • AWS Cost Explorer

    I see beautiful graphs in AWS Cost Explorer, especially "last 6 month costs for all the services". There is an option to download the stats as a CSV file. Is there a way to send that by email? I want to see my cost comparison over email.

    If there is no direct way, I can see a couple of alternatives:

    Cost Explorer SDK (boto3), or turning on delivery of all the billing data to S3 so it can be retrieved from the bucket.

    http://www.silota.com/docs/recipes/sql-athena-for-aws-billing.html Can anyone let me know the easiest way to get these details?
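    If the SDK route is taken, boto3's Cost Explorer client exposes get_cost_and_usage, which takes a TimePeriod/Granularity/Metrics request. A sketch of building the request parameters for the last six months (a pure helper; the emailing step is left out):

```python
from datetime import date

def last_months_period(today: date, months: int = 6) -> dict:
    """Build the TimePeriod dict Cost Explorer expects (End is exclusive)."""
    year, month = today.year, today.month - months
    while month <= 0:
        year, month = year - 1, month + 12
    return {
        "Start": f"{year:04d}-{month:02d}-01",
        "End": today.isoformat(),
    }

params = {
    "TimePeriod": last_months_period(date(2018, 7, 15)),
    "Granularity": "MONTHLY",
    "Metrics": ["UnblendedCost"],
    "GroupBy": [{"Type": "DIMENSION", "Key": "SERVICE"}],
}
# This dict would be passed to boto3.client("ce").get_cost_and_usage(**params),
# and the response rendered to CSV before being attached to an email.
```

    Scheduling that script (e.g. on Lambda) is one way to get the comparison into an inbox without the console download step.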

  • Reporting In Excel Using Python / Zeppelin / EMR

    Current process: I have to write SQL, run it on my team's Redshift cluster, copy/paste the results into Excel, format the Excel table, and refresh the Excel file depending on other cuts of data the customer wants. Naturally, this is a huge time sink when the entire Excel file has to be refreshed for whatever reason (e.g. you spot a bug in the query, the data updates, etc.). The cluster is a shared resource too, so large queries can take a long time to run and therefore block other work.

    Goal: code in Python instead of writing SQL and copying/pasting manually, with output in Excel for non-technical stakeholders who want to do further calculations in Excel. Ideally I want the raw data in one tab and a nice table in another. A bonus would be the nice table having Excel formulas linking the sheets (SUMIFS etc.) or some sort of 'cross check' that the numbers are not completely off.

    Output: Example Table

    Partial solution: I have set up data pipelines in S3, so I can 'query' through a Zeppelin notebook on EMR by importing the S3 data. I know Zeppelin makes tables/graphs, which are useful, but Excel is still a must here.

    Question: I can generate a table within the notebook, but my stakeholders are non-technical, so they refuse to use any dashboard/UI and want everything in Excel. The only solutions I have seen so far for exporting the data to Excel only export a 'dump' of data instead of a formatted table (like the one attached), so I still end up doing the boring manual work.

    Is producing something like the table above possible in Excel but using Python, so I can re-use the code the next time this is requested? The data tables don't change much; it's more just various cuts, so it would be great to know how to do this.
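    One way to get both a raw-data tab and a formula-linked summary tab from Python is openpyxl (assuming it is available; the sheet, column, and file names below are made up for illustration):

```python
from openpyxl import Workbook

wb = Workbook()

# Tab 1: the raw data dump.
raw = wb.active
raw.title = "raw_data"
raw.append(["region", "sales"])
for row in [["EU", 100], ["US", 250], ["EU", 40]]:
    raw.append(row)

# Tab 2: a small summary table whose cells are real Excel formulas,
# so stakeholders can trace each number back to the raw tab.
summary = wb.create_sheet("summary")
summary.append(["region", "total_sales"])
summary.append(["EU", '=SUMIF(raw_data!A:A,"EU",raw_data!B:B)'])
summary.append(["US", '=SUMIF(raw_data!A:A,"US",raw_data!B:B)'])

wb.save("report.xlsx")
```

    Because the summary cells are stored as formulas rather than values, refreshing the report is just re-running the script against new raw data; the cross-check the question asks for comes for free, since the SUMIFs recompute inside Excel.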

  • CloudFormation template - issue assigning S3 permissions

    I am working on a CloudFormation template that consolidates CloudTrail logs from multiple accounts into a single S3 bucket, attempting to set permissions on the S3 bucket so that each account writes to its own folder. The last two !Join lines are giving me grief.

    MasterAccNum:
    Type: String
    Description: 12 digit account number without dashes.
    
    ProdAccNum:
    Type: String
    Description: 12 digit account number without dashes.
    
    Sid: AWSCloudTrailWrite
          Effect: Allow
          Principal:
            Service: cloudtrail.amazonaws.com
          Action: s3:PutObject
          Resource: 
            - !Join ['', ['arn:aws:s3:::', !Ref 'LoggingS3Bucket', /AWSCloudTrailLogs/, !Ref 'AWS::AccountId', /*]]
            - !Join ['', ['arn:aws:s3:::', !Ref 'LoggingS3Bucket', /AWSCloudTrailLogs/, !Ref 'MasterAccNum', /*]]
            - !Join ['', ['arn:aws:s3:::', !Ref 'LoggingS3Bucket', /AWSCloudTrailLogs/, !Ref 'ProdAccNum', /*]]
    

    I enter 12 digit account numbers for MasterAccNum and ProdAccNum parameters and end up with S3 permissions looking like this:

    "Resource": [
                "arn:aws:s3:::mycompanyname-is-cloudtrail-logs/AWSCloudTrailLogs/012345678901/*",
                "arn:aws:s3:::mycompanyname-is-cloudtrail-logs/AWSCloudTrailLogs//*",
                "arn:aws:s3:::mycompanyname-is-cloudtrail-logs/AWSCloudTrailLogs//*"
    

    The last two lines are missing the parameters entered during stack launch.
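    For reference, Fn::Join with an empty delimiter simply concatenates the list after each !Ref resolves, so an empty segment in the rendered policy usually means the referenced parameter resolved to an empty string (e.g. the value never reached the stack that renders the policy), not that the join itself is wrong. A tiny simulation with made-up values:

```python
def fn_join(delimiter, parts):
    """Mimic CloudFormation's Fn::Join after all Refs are resolved."""
    return delimiter.join(parts)

bucket = "mycompanyname-is-cloudtrail-logs"  # stand-in for !Ref LoggingS3Bucket
master = "012345678902"                      # stand-in for !Ref MasterAccNum

arn = fn_join("", ["arn:aws:s3:::", bucket, "/AWSCloudTrailLogs/", master, "/*"])
print(arn)
```

    If MasterAccNum resolved correctly, the output would match the first (working) resource line, which points the debugging at how the parameter values are supplied rather than at the !Join expressions.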

  • AWS Failed to execute 'setRequestHeader' on 'XMLHttpRequest' with Evaporate.js

    I'm using Evaporate (https://github.com/TTLabs/EvaporateJS) to upload my files to S3. The app is a Vue/Nuxt.js setup, and this is my config:

    const uploadApiConfig = {
        signerUrl: '/api/sign_auth',
        awsRegion: process.env.awsRegion,
        aws_key: process.env.awsKey,
        bucket: process.env.awsBucket,
        computeContentMd5: true,
        awsSignatureVersion: '4',
        cryptoMd5Method: (data) => {
            return AWS.util.crypto.md5(data, 'base64')
        },
        cryptoHexEncodedHash256: (data) => {
            return AWS.util.crypto.sha256(data, 'hex')
        },
        signHeaders: {
            'authorization': `Bearer ${token}`
        }
    }
    

    But I still get this error: Failed to execute 'setRequestHeader' on 'XMLHttpRequest': 'AWS4-HMAC-SHA256 Credential=.../s3/aws4_request, SignedHeaders=host;x-amz-date. Can anyone help?
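    setRequestHeader throws when the header value contains characters that are illegal in an HTTP header, typically a control character such as a newline coming back from the signing endpoint, and the Authorization value in the error appearing to stop after SignedHeaders suggests the signer's response may be malformed. A small offline check that could be run against the signer's raw response (a sketch, not Evaporate's API):

```python
def is_valid_header_value(value: str) -> bool:
    """True if a string is safe to pass to XMLHttpRequest.setRequestHeader:
    printable ASCII plus tab, with no CR/LF or other control characters."""
    return all(32 <= ord(ch) < 127 or ch == "\t" for ch in value)
```

    If the hex signature returned by /api/sign_auth ends with a newline or is wrapped in quotes/JSON, the assembled Authorization header fails this check and the browser raises exactly this error.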

  • Can two wildcard aliases overlap for two different cloudfront distributions owned by the same account?

    I would like one CloudFront distribution with the alias *.example.com, and another one with the alias *.subdomain.example.com. I am having trouble adding the *.example.com alias to a second distribution, though, getting the error 409 CNAMEAlreadyExists saying it's in use. This is all in the same account.

    Per AWS docs I see that:

    A wildcard alternate domain name, such as *.example.com, can include another alternate domain name, such as example.com, as long as they're both in the same CloudFront distribution or they're in distributions that were created by using the same AWS account.

    https://docs.aws.amazon.com/AmazonCloudFront/latest/DeveloperGuide/CNAMEs.html

    But can it overlap with another wildcard CNAME like *.subdomain.example.com?

  • Rails not sending cookies to subdomains

    The cookie isn't sent with requests to my subdomain, even though I used domain: :all when I set it. I set up a working test subdomain route http://blog.local.test:3000/blogs and made an XHR request to this route, but in the Chrome network tab I see no cookie in the request. On my other URLs (localhost/*) I see the cookie as usual.

    On localhost, in the application tab, the cookie domain is just localhost. But when I deploy, the domain is correctly .example.com due to the domain: :all option. However, when I send requests to media.example.com (I CNAMEd this subdomain to CloudFront, so it's not part of my application), there is no cookie header in the request. The code to set the cookie:

    cookies['CloudFront-Policy'] = {
        value: cloudfront_cookies['CloudFront-Policy'],
        domain: :all,
        expires: 1.days.from_now
      }
    

    I also tried explicitly setting the domain for the cookie, but there was no difference; I think the domain: :all option does this automatically. How can I send the cookie to subdomains?
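    For reference, browsers decide whether to attach a cookie by domain-matching the request host against the cookie's Domain attribute (per RFC 6265, the host must equal the domain or be a subdomain of it). A rough sketch of that check:

```python
def domain_match(request_host: str, cookie_domain: str) -> bool:
    """Approximate the RFC 6265 domain-match used to decide
    whether a cookie is attached to a request."""
    cookie_domain = cookie_domain.lstrip(".").lower()
    request_host = request_host.lower()
    return request_host == cookie_domain or request_host.endswith("." + cookie_domain)

# A cookie scoped to .example.com should match media.example.com.
print(domain_match("media.example.com", ".example.com"))
```

    So if the deployed cookie domain really is .example.com, the domain match itself should pass for media.example.com; in that case the usual culprit is the XHR not being made with credentials included (withCredentials for cross-origin requests), rather than the cookie's scope.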

  • Redirecting specific URLs to AWS Cloudfront

    So I have a web app running in AWS on EC2, and I have a CloudFront distribution that serves up some custom content from an S3 bucket when the EC2 site is down.

    I now have a requirement to serve up some files from the web app that are stored in S3. How would I go about sending requests made to https://example.com/files/* to the CloudFront distribution? Route 53?

  • SSL caching prevents HTTPS requests

    I use the W3 Total Cache Plugin for Wordpress.

    If I enable "Cache SSL (https) requests" in W3 Total Cache and test my site at https://securityheaders.io for security reports, I get a positive response only once.

    Does anyone know how I can fix that?

    The problem is that when I disable SSL caching, the 304 response (Last-Modified) doesn't work anymore.

    If I clear the cache, it works again and the response is positive. But I can't clear the cache after every request for every user.

    Or am I thinking about this wrong?

  • WordPress site getting 'Uncaught SyntaxError: Unexpected token <'

    The site was working fine with Akamai + W3 Total Cache. Then the next day the site loaded without JS + CSS. I'm quite puzzled. The site shows properly in the WP dashboard. I'm not even sure how to go about solving this issue. (I'm a UI/UX designer who bit off more than I can chew.) Any help is appreciated!

    http://wpsojo.azurewebsites.net

  • W3 total cache not minifying certain js file

    Do you have any idea why W3 Total Cache is not minifying this one JavaScript file? https://www.gadrilling.com/wp-content/cache/minify/64ba3.js

    The rest of the JS files are completely fine and minified, so it shouldn't be the plugin settings.

    This specific file is quite large; maybe there is some max file size limitation in the plugin?