Smush Pro and AWS Offload S3 free version

I ran Smush Pro yesterday on 1,405 images. ALL of them ended up with 0 bytes of data. Every image size, ALL ZERO! I had to reload everything and delete Smush off the planet.

This site has been running since 2016 using the AWS Offload S3 free version, and since Smush Pro actually had a check box for it, I decided it finally knew how to handle off-server images. Boy, was I wrong. Is this a known problem???

  • Lee

    Does Smush Pro try to write directly to S3 from your outside servers, or does it write from the server that Smush Pro is running on? In my case, I have S3 locked down so that it only allows reads and writes from the website server as it serves pages.

    If Smush Pro is trying to write from your servers, you should have seen many failures. The only way I could use Smush Pro would be to identify its domain in the security script, as I do for my web server. Is it possible to identify the domain of Smush Pro?
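
    (For what it's worth, one way to check that from any given machine is to attempt a tiny test write against the bucket and see whether it is refused. A minimal sketch in Python with boto3 is below; the bucket name and test key are placeholders, and it assumes AWS credentials are already configured on that machine.)

    # Quick test: can THIS machine write an object into the bucket?
    import boto3
    from botocore.exceptions import ClientError

    BUCKET = "example-bucket"          # placeholder -- substitute the real bucket name
    TEST_KEY = "write-test.txt"        # throwaway object, deleted afterwards

    s3 = boto3.client("s3")
    try:
        s3.put_object(Bucket=BUCKET, Key=TEST_KEY, Body=b"test")
        s3.delete_object(Bucket=BUCKET, Key=TEST_KEY)   # clean up the test object
        print("Write from this host was accepted.")
    except ClientError as err:
        print("Write from this host was refused:", err.response["Error"]["Code"])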

  • James Morris

    Hello Lee

    I'm terribly sorry you've had this experience. This is the first instance of something like this that I'm aware of.

    I would like to investigate this issue further; however, our team will need access to a site where we can perform some tests. Would you please create a staging site with the same/similar setup as your production site? Specifically, could you set up another bucket with the same permissions structure and mirror the data/images onto the staging site so we can test this out safely?

    Also, our team will need access to the staging site and server to run some advanced debugging.

    Once you have the staging site setup, please visit the Contact page and complete the form with the following information:

    https://premium.wpmudev.org/contact/#i-have-a-different-question

    Option: I have a different Question

    Subject: "Attn: James Morris"

    In the Message box, please provide the following:

    - link back to this thread for reference
    - any other relevant urls

    - Admin login:
    Admin username
    Admin password
    Login url

    - Hosting Control Panel Login
    Admin username
    Admin password
    Login url

    ~OR~

    - FTP credentials
    host
    username
    password
    (and port if required)

    Best regards,

    James Morris

  • Lee

    This has happened on all the sites I tried it on. There is nothing special about any of them, and they are on multiple servers. I checked other test sites, all using AWS S3, and all of them have ZERO-length image files.

    I have never had Smush Pro work since I became a member of this service, because I have always used S3 offloading.

    I have DELETED all Smush Pro plugins from all sites.

    Don't expect me to use it again until I hear that the developers have spent time testing AWS S3 configurations and released an update. You don't need my sites to test. I'm not spending more time on this.

  • James Morris

    Hello Lee

    I'm terribly sorry you've had this experience. Many of our members and staff have used Smush Pro successfully with Amazon S3 without issue. I have a test site configured this way right now for BETA testing, so I'm really quite puzzled as to why you are having this experience. If you would rather not provide access to a site to test with, I understand. However, if you could provide as much information as possible about your configuration and policies on S3, this will, at the very least, help us to test this issue further and see if there's something we've missed.

    Best regards,

    James Morris

  • Lee

    Below is the bucket policy that was enabled on S3 at the time Smush Pro ran and set all the images to NULL. First of all, Smush Pro should never run if it can't write to the destination, and it should certainly stop on the first failure (a rough sketch of that kind of pre-flight check follows the policy below).

    bucket policy:

    {
        "Version": "2008-10-17",
        "Id": "deleted from this example",
        "Statement": [
            {
                "Sid": "Allow get requests referred by domain",
                "Effect": "Deny",
                "Principal": {
                    "AWS": "*"
                },
                "Action": "s3:GetObject",
                "Resource": "arn:aws:s3:::sealifenetbucket/*",
                "Condition": {
                    "StringNotLike": {
                        "aws:Referer": [
                            "https://www.sealife.net/*",
                            "https://sealife.net/*"
                        ]
                    }
                }
            }
        ]
    }
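
    For illustration only, a pre-flight check along those lines might look like the sketch below (Python with boto3). The bucket name is taken from the policy above; the test key is made up, and the script assumes credentials are already configured on the web server. The idea is simply to write a throwaway object and read it back before any real image is touched, and to abort the whole run if either step fails.

    import sys
    import boto3
    from botocore.exceptions import ClientError

    BUCKET = "sealifenetbucket"                # bucket from the policy above
    TEST_KEY = "smush-preflight-check.tmp"     # hypothetical throwaway key
    PAYLOAD = b"preflight"

    s3 = boto3.client("s3")

    def preflight_ok() -> bool:
        """Write a test object, read it back, delete it; False on any failure."""
        try:
            s3.put_object(Bucket=BUCKET, Key=TEST_KEY, Body=PAYLOAD)
            body = s3.get_object(Bucket=BUCKET, Key=TEST_KEY)["Body"].read()
            s3.delete_object(Bucket=BUCKET, Key=TEST_KEY)
            return body == PAYLOAD             # must read back exactly what was written
        except ClientError as err:
            print("S3 pre-flight failed:", err.response["Error"]["Code"])
            return False

    if not preflight_ok():
        sys.exit("Refusing to start the bulk run: bucket is not read/writable from here.")
    print("Pre-flight passed; safe to start optimizing.")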

  • James Morris

    Hello Lee

    Really, the best way to verify that this will work for you, in your specific environment, would be to set up a staging site where you can test out all the functionality at your leisure. We do our very best to ensure everything works well upon release, but it's pretty well impossible to cover the (literal) millions of different configuration scenarios out there. So, while it may work perfectly for us in our limited test environments, the only way to be certain it will work for you is to test it in your own environment.

    Best regards,

    James Morris
