Active Storage Overview
This guide covers how to attach files to your Active Record models.
After reading this guide, you will know:
- How to attach one or many files to a record.
- How to delete an attached file.
- How to link to an attached file.
- How to use variants to transform images.
- How to generate an image representation of a non-image file, such as a PDF or a video.
- How to send file uploads directly from browsers to a storage service, bypassing your application servers.
- How to clean up files stored during testing.
- How to implement support for additional storage services.
Chapters
- What is Active Storage?
- Requirements
- Setup
- Disk Service
- S3 Service (Amazon S3 and S3-compatible APIs)
- Microsoft Azure Storage Service
- Google Cloud Storage Service
- Mirror Service
- Public access
- Attaching Files to Records
- has_one_attached
- has_many_attached
- Attaching File/IO Objects
- Removing Files
- Serving Files
- Redirect mode
- Proxy mode
- Authenticated Controllers
- Downloading Files
- Analyzing Files
- Displaying Images, Videos, and PDFs
- Lazy vs Immediate Loading
- Transforming Images
- Previewing Files
- Direct Uploads
- Usage
- Cross-Origin Resource Sharing (CORS) configuration
- Direct upload JavaScript events
- Example
- Integrating with Libraries or Frameworks
- Testing
- Discarding files created during tests
- Adding attachments to fixtures
- Implementing Support for Other Cloud Services
- Purging Unattached Uploads
1 What is Active Storage?
Active Storage facilitates uploading files to a cloud storage service like Amazon S3, Google Cloud Storage, or Microsoft Azure Storage and attaching those files to Active Record objects. It comes with a local disk-based service for development and testing and supports mirroring files to subordinate services for backups and migrations.
Using Active Storage, an application can transform image uploads or generate image representations of non-image uploads like PDFs and videos, and extract metadata from arbitrary files.
1.1 Requirements
Various features of Active Storage depend on third-party software which Rails will not install, and which must be installed separately:
- libvips v8.6+ or ImageMagick for image analysis and transformations
- ffmpeg v3.4+ for video previews and ffprobe for video/audio analysis
- poppler or muPDF for PDF previews
Image analysis and transformations also require the image_processing gem. Uncomment it in your Gemfile, or add it if necessary:
gem "image_processing", ">= 1.2"
Compared to libvips, ImageMagick is better known and more widely available. However, libvips can be up to 10x faster and consume 1/10 the memory. For JPEG files, this can be further improved by replacing libjpeg-dev with libjpeg-turbo-dev, which is 2-7x faster.
Before you install and use third-party software, make sure you understand the licensing implications of doing so. MuPDF, in particular, is licensed under the AGPL and requires a commercial license for some uses.
2 Setup
Active Storage uses three tables in your application's database, named active_storage_blobs, active_storage_variant_records, and active_storage_attachments. After creating a new application (or upgrading your application to Rails 5.2), run bin/rails active_storage:install to generate a migration that creates these tables. Use bin/rails db:migrate to run the migration.
active_storage_attachments is a polymorphic join table that stores your model's class name. If your model's class name changes, you will need to run a migration on this table to update the underlying record_type to your model's new class name.
If you are using UUIDs instead of integers as the primary key on your models, you will need to change the column type of active_storage_attachments.record_id and active_storage_variant_records.id in the generated migration accordingly.
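As a minimal sketch of that adjustment (assuming UUID primary keys on your models; the exact contents of the generated migration vary by Rails version, so compare against what bin/rails active_storage:install actually produces):
# Excerpt from the generated migration, with the record reference adjusted
# so active_storage_attachments.record_id matches UUID primary keys.
create_table :active_storage_attachments do |t|
  t.string     :name, null: false
  t.references :record, null: false, polymorphic: true, index: false, type: :uuid
  t.references :blob,   null: false
  t.datetime   :created_at, null: false
end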
Declare Active Storage services in config/storage.yml. For each service your application uses, provide a name and the requisite configuration. The example below declares three services named local, test, and amazon:
local:
  service: Disk
  root: <%= Rails.root.join("storage") %>

test:
  service: Disk
  root: <%= Rails.root.join("tmp/storage") %>

amazon:
  service: S3
  access_key_id: ""
  secret_access_key: ""
  bucket: ""
  region: "" # e.g. 'us-east-1'
Tell Active Storage which service to use by setting Rails.application.config.active_storage.service. Because each environment will likely use a different service, it is recommended to do this on a per-environment basis. To use the Disk service from the previous example in the development environment, you would add the following to config/environments/development.rb:
# Store files locally.
config.active_storage.service = :local
To use the S3 service in production, you add the following to config/environments/production.rb:
# Store files on Amazon S3.
config.active_storage.service = :amazon
To use the test service when testing, you add the following to config/environments/test.rb:
# Store uploaded files on the local file system in a temporary directory.
config.active_storage.service = :test
Keep reading for more information on the built-in service adapters (e.g. Disk and S3) and the configuration they require.
Configuration files that are environment-specific will take precedence: in production, for example, the config/storage/production.yml file (if present) will take precedence over the config/storage.yml file.
It is recommended to use Rails.env in the bucket names to further reduce the risk of accidentally destroying production data.
amazon:
  service: S3
  # ...
  bucket: your_own_bucket-<%= Rails.env %>

google:
  service: GCS
  # ...
  bucket: your_own_bucket-<%= Rails.env %>

azure:
  service: AzureStorage
  # ...
  container: your_container_name-<%= Rails.env %>
2.1 Disk Service
Declare a Disk service in config/storage.yml:
local:
  service: Disk
  root: <%= Rails.root.join("storage") %>
2.2 S3 Service (Amazon S3 and S3-compatible APIs)
To connect to Amazon S3, declare an S3 service in config/storage.yml:
amazon:
  service: S3
  access_key_id: ""
  secret_access_key: ""
  region: ""
  bucket: ""
Optionally provide client and upload options:
amazon:
  service: S3
  access_key_id: ""
  secret_access_key: ""
  region: ""
  bucket: ""
  http_open_timeout: 0
  http_read_timeout: 0
  retry_limit: 0
  upload:
    server_side_encryption: "" # 'aws:kms' or 'AES256'
Set sensible client HTTP timeouts and retry limits for your application. In certain failure scenarios, the default AWS client configuration may cause connections to be held for up to several minutes and lead to request queuing.
Add the aws-sdk-s3 gem to your Gemfile:
gem "aws-sdk-s3" , crave: false
The core features of Active Storage require the following permissions: s3:ListBucket, s3:PutObject, s3:GetObject, and s3:DeleteObject. Public access additionally requires s3:PutObjectAcl. If you have additional upload options configured, such as setting ACLs, then additional permissions may be required.
If you want to use environment variables, standard SDK configuration files, profiles, IAM instance profiles, or task roles, you can omit the access_key_id, secret_access_key, and region keys in the example above. The S3 service supports all of the authentication options described in the AWS SDK documentation.
To connect to an S3-compatible object storage API such as DigitalOcean Spaces, provide the endpoint:
digitalocean:
  service: S3
  endpoint: https://nyc3.digitaloceanspaces.com
  access_key_id: ...
  secret_access_key: ...
  # ...and other options
There are many other options available. You can check them in the AWS S3 Client documentation.
2.3 Microsoft Azure Storage Service
Declare an Azure Storage service in config/storage.yml:
azure:
  service: AzureStorage
  storage_account_name: ""
  storage_access_key: ""
  container: ""
Add the azure-storage-blob gem to your Gemfile:
gem "azure-storage-blob" , crave: false
2.4 Google Cloud Storage Service
Declare a Google Cloud Storage service in config/storage.yml:
google:
  service: GCS
  credentials: <%= Rails.root.join("path/to/keyfile.json") %>
  project: ""
  bucket: ""
Optionally provide a Hash of credentials instead of a keyfile path:
google:
  service: GCS
  credentials:
    type: "service_account"
    project_id: ""
    private_key_id: <%= Rails.application.credentials.dig(:gcs, :private_key_id) %>
    private_key: <%= Rails.application.credentials.dig(:gcs, :private_key).dump %>
    client_email: ""
    client_id: ""
    auth_uri: "https://accounts.google.com/o/oauth2/auth"
    token_uri: "https://accounts.google.com/o/oauth2/token"
    auth_provider_x509_cert_url: "https://www.googleapis.com/oauth2/v1/certs"
    client_x509_cert_url: ""
  project: ""
  bucket: ""
Optionally provide Cache-Control metadata to set on uploaded assets:
google:
  service: GCS
  ...
  cache_control: "public, max-age=3600"
Optionally use IAM instead of the credentials when signing URLs. This is useful if you are authenticating your GKE applications with Workload Identity; see this Google Cloud blog post for more information.
google:
  service: GCS
  ...
  iam: true
Optionally use a specific GSA when signing URLs. When using IAM, the metadata server will be contacted to get the GSA email, but this metadata server is not always present (e.g. local tests) and you may wish to use a non-default GSA.
google:
  service: GCS
  ...
  iam: true
  gsa_email: "foobar@baz.iam.gserviceaccount.com"
Add the google-cloud-storage gem to your Gemfile:
gem "google-deject-storage" , "~> 1.11" , require: faux
2.5 Mirror Service
You can keep multiple services in sync by defining a mirror service. A mirror service replicates uploads and deletes across two or more subordinate services.
A mirror service is intended to be used temporarily during a migration between services in production. You can start mirroring to a new service, copy pre-existing files from the old service to the new one, then go all-in on the new service.
Mirroring is not atomic. It is possible for an upload to succeed on the primary service and fail on any of the subordinate services. Before going all-in on a new service, verify that all files have been copied.
Define each of the services you'd like to mirror as described above. Reference them by name when defining a mirror service:
s3_west_coast:
  service: S3
  access_key_id: ""
  secret_access_key: ""
  region: ""
  bucket: ""

s3_east_coast:
  service: S3
  access_key_id: ""
  secret_access_key: ""
  region: ""
  bucket: ""

production:
  service: Mirror
  primary: s3_east_coast
  mirrors:
    - s3_west_coast
Although all secondary services receive uploads, downloads are always handled by the primary service.
Mirror services are compatible with direct uploads. New files are directly uploaded to the primary service. When a directly-uploaded file is attached to a record, a background job is enqueued to copy it to the secondary services.
2.6 Public access
By default, Active Storage assumes private access to services. This means generating signed, single-use URLs for blobs. If you'd rather make blobs publicly accessible, specify public: true in your app's config/storage.yml:
gcs : &gcs service : GCS project : " " private_gcs : << : *gcs credentials : <%= Rails.root.bring together("path/to/private_keyfile.json") %> saucepan : " " public_gcs : << : *gcs credentials : <%= Rails.root.bring together("path/to/public_keyfile.json") %> bucket : " " public : truthful
Make sure your buckets are properly configured for public access. See the docs on how to enable public read permissions for the Amazon S3, Google Cloud Storage, and Microsoft Azure storage services. Amazon S3 additionally requires that you have the s3:PutObjectAcl permission.
When converting an existing application to use public: true, make sure to update every private file in the bucket to be publicly readable before switching over.
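For Google Cloud Storage, one way to do that is a small one-off script using the google-cloud-storage gem. This is a hedged sketch with placeholder project and bucket names; verify the ACL calls against the gem's documentation before running it against real data:
require "google/cloud/storage"

# Make every existing object in the bucket publicly readable
# before switching the service to public: true.
storage = Google::Cloud::Storage.new(project_id: "your-project-id")
bucket  = storage.bucket("your_own_bucket-production")

bucket.files.all do |file|
  file.acl.public! # grants allUsers read access to this object
end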
3 Attaching Files to Records
3.1 has_one_attached
The has_one_attached macro sets up a one-to-one mapping between records and files. Each record can have one file attached to it.
For example, suppose your application has a User model. If you want each user to have an avatar, define the User model as follows:
class User < ApplicationRecord has_one_attached :avatar end
or, if you are using Rails 6.0+, you can run a model generator command like this:
bin/rails generate model User avatar:attachment
You can create a user with an avatar:
<%= form . file_field :avatar %>
class SignupController < ApplicationController
  def create
    user = User.create!(user_params)
    session[:user_id] = user.id
    redirect_to root_path
  end

  private
    def user_params
      params.require(:user).permit(:email_address, :password, :avatar)
    end
end
Call avatar.attach to attach an avatar to an existing user:
user.avatar.attach(params[:avatar])
Call avatar.attached? to determine whether a particular user has an avatar:
user.avatar.attached?
In some cases you might want to override the default service for a specific attachment. You can configure specific services per attachment using the service option:
class User < ApplicationRecord has_one_attached :avatar , service: :s3 end
You can configure specific variants per attachment by calling the variant method on the yielded attachable object:
grade User < ApplicationRecord has_one_attached :avatar practice | attachable | attachable . variant :thumb , resize_to_limit: [ 100 , 100 ] stop end
Call avatar.variant(:thumb) to get a thumb variant of an avatar:
<%= image_tag user . avatar . variant ( :thumb ) %>
3.2 has_many_attached
The has_many_attached macro sets up a one-to-many relationship between records and files. Each record can have many files attached to it.
For instance, suppose your application has a Message model. If you want each message to have many images, define the Message model as follows:
class Message < ApplicationRecord has_many_attached :images end
or, if you are using Rails 6.0+, you can run a model generator command like this:
bin/rails generate model Message images:attachments
You can create a message with images:
class MessagesController < ApplicationController
  def create
    message = Message.create!(message_params)
    redirect_to message
  end

  private
    def message_params
      params.require(:message).permit(:title, :content, images: [])
    end
end
Call images.attach to add new images to an existing message:
@message.images.attach(params[:images])
Call images.attached? to determine whether a particular message has any images:
@message.images.attached?
Overriding the default service is done the same way as with has_one_attached, by using the service option:
grade Message < ApplicationRecord has_many_attached :images , service: :s3 stop
Configuring specific variants is done the same way as with has_one_attached, by calling the variant method on the yielded attachable object:
class Message < ApplicationRecord has_many_attached :images do | attachable | attachable . variant :thumb , resize_to_limit: [ 100 , 100 ] end end
3.3 Attaching File/IO Objects
Sometimes you need to attach a file that doesn't arrive via an HTTP request. For example, you may want to attach a file you generated on disk or downloaded from a user-submitted URL. You may also want to attach a fixture file in a model test. To do that, provide a Hash containing at least an open IO object and a filename:
@message.images.attach(io: File.open('/path/to/file'), filename: 'file.pdf')
When possible, provide a content type as well. Active Storage attempts to determine a file's content type from its data. It falls back to the content type you provide if it can't do that.
@message.images.attach(io: File.open('/path/to/file'), filename: 'file.pdf', content_type: 'application/pdf')
You can bypass the content type inference from the data by passing in identify: false along with the content_type.
@message.images.attach(io: File.open('/path/to/file'), filename: 'file.pdf', content_type: 'application/pdf', identify: false)
If you don't provide a content type and Active Storage can't determine the file's content type automatically, it defaults to application/octet-stream.
4 Removing Files
To remove an attachment from a model, call purge on the attachment. If your application is set up to use Active Job, removal can be done in the background instead by calling purge_later. Purging deletes the blob and the file from the storage service.
# Synchronously destroy the avatar and actual resource files.
user.avatar.purge

# Destroy the associated models and actual resource files async, via Active Job.
user.avatar.purge_later
5 Serving Files
Active Storage supports two ways to serve files: redirecting and proxying.
All Active Storage controllers are publicly accessible by default. The generated URLs are hard to guess, but permanent by design. If your files require a higher level of protection, consider implementing Authenticated Controllers.
5.1 Redirect mode
To generate a permanent URL for a blob, you can pass the blob to the url_for view helper. This generates a URL with the blob's signed_id that is routed to the blob's RedirectController:
url_for(user.avatar)
# => /rails/active_storage/blobs/:signed_id/my-avatar.png
The RedirectController redirects to the actual service endpoint. This indirection decouples the service URL from the actual one and allows, for example, mirroring attachments in different services for high availability. The redirection has an HTTP expiration of 5 minutes.
To create a download link, use the rails_blob_{path|url} helper. Using this helper allows you to set the disposition.
rails_blob_path(user.avatar, disposition: "attachment")
To prevent XSS attacks, Active Storage forces the Content-Disposition header to "attachment" for some kinds of files. To change this behaviour, see the available configuration options in Configuring Rails Applications.
If you need to create a link from outside of the controller/view context (background jobs, cron jobs, etc.), you can access rails_blob_path like this:
Rails.application.routes.url_helpers.rails_blob_path(user.avatar, only_path: true)
5.2 Proxy mode
Optionally, files can be proxied instead. This means that your application servers will download file data from the storage service in response to requests. This can be useful for serving files from a CDN.
You can configure Active Storage to use proxying by default:
# config/initializers/active_storage.rb
Rails.application.config.active_storage.resolve_model_to_route = :rails_storage_proxy
Or, if you want to explicitly proxy specific attachments, there are URL helpers you can use in the form of rails_storage_proxy_path and rails_storage_proxy_url.
<%= image_tag rails_storage_proxy_path ( @user . avatar ) %>
5.2.1 Putting a CDN in front of Active Storage
Additionally, in order to use a CDN for Active Storage attachments, you will need to generate URLs with proxy mode so that they are served by your app and the CDN will cache the attachment without any extra configuration. This works out of the box because the default Active Storage proxy controller sets an HTTP header indicating to the CDN to cache the response.
You should also make sure that the generated URLs use the CDN host instead of your app host. There are multiple ways to achieve this, but in general it involves tweaking your config/routes.rb file so that you can generate the proper URLs for the attachments and their variations. As an example, you could add this:
# config/routes.rb
direct :cdn_image do |model, options|
  expires_in = options.delete(:expires_in) { ActiveStorage.urls_expire_in }

  if model.respond_to?(:signed_id)
    route_for(
      :rails_service_blob_proxy,
      model.signed_id(expires_in: expires_in),
      model.filename,
      options.merge(host: ENV['CDN_HOST'])
    )
  else
    signed_blob_id = model.blob.signed_id(expires_in: expires_in)
    variation_key  = model.variation.key
    filename       = model.blob.filename

    route_for(
      :rails_blob_representation_proxy,
      signed_blob_id,
      variation_key,
      filename,
      options.merge(host: ENV['CDN_HOST'])
    )
  end
end
and then generate routes like this:
<%= cdn_image_url ( user . avatar . variant ( resize_to_limit: [ 128 , 128 ])) %>
5.3 Authenticated Controllers
All Active Storage controllers are publicly accessible by default. The generated URLs use a plain signed_id, making them difficult to guess but permanent. Anyone who knows the blob URL will be able to access it, even if a before_action in your ApplicationController would otherwise require a login. If your files require a higher level of protection, you can implement your own authenticated controllers, based on ActiveStorage::Blobs::RedirectController, ActiveStorage::Blobs::ProxyController, ActiveStorage::Representations::RedirectController, and ActiveStorage::Representations::ProxyController.
To only allow an account to access its own logo, you could do the following:
# config/routes.rb
resource :account do
  resource :logo
end
# app/controllers/logos_controller.rb
class LogosController < ApplicationController
  # Through ApplicationController:
  # include Authenticate, SetCurrentAccount

  def show
    redirect_to Current.account.logo.url
  end
end
<%= image_tag account_logo_path %>
And then you might want to disable the Active Storage default routes with:
config.active_storage.draw_routes = false
to prevent files being accessed via the publicly accessible URLs.
6 Downloading Files
Sometimes you need to process a blob after it's uploaded, for example to convert it to a different format. Use the attachment's download method to read a blob's binary data into memory:
binary = user.avatar.download
You might want to download a blob to a file on disk so an external program (e.g. a virus scanner or media transcoder) can operate on it. Use the attachment's open method to download a blob to a tempfile on disk:
message.video.open do |file|
  system '/path/to/virus/scanner', file.path
  # ...
end
It's important to know that the file is not yet available in the after_create callback, but only in after_create_commit.
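For example, a minimal sketch of processing an attachment right after a record is created (the model, the callback it registers, and the scanner command are placeholders for illustration):
class User < ApplicationRecord
  has_one_attached :avatar

  # The attached file is only readable once the transaction commits,
  # so use after_create_commit rather than after_create.
  after_create_commit :scan_avatar

  private
    def scan_avatar
      return unless avatar.attached?

      avatar.open do |file|
        system "/path/to/virus/scanner", file.path
      end
    end
end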
7 Analyzing Files
Active Storage analyzes files once they've been uploaded by queuing a job in Active Job. Analyzed files will store additional information in the metadata hash, including analyzed: true. You can check whether a blob has been analyzed by calling analyzed? on it.
Image analysis provides width and height attributes. Video analysis provides these, as well as duration, angle, display_aspect_ratio, and video and audio booleans to indicate the presence of those channels. Audio analysis provides duration and bit_rate attributes.
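For example, once the analysis job has run you can read these values from the blob's metadata hash (a sketch assuming an attached avatar image and an attached video; the actual values depend on your files):
user.avatar.analyzed?             # => true once the analysis job has completed
user.avatar.metadata[:width]      # => e.g. 512
user.avatar.metadata[:height]     # => e.g. 512

message.video.metadata[:duration] # => length in seconds
message.video.metadata[:audio]    # => true if an audio channel is present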
8 Displaying Images, Videos, and PDFs
Active Storage supports representing a variety of files. You can call representation on an attachment to display an image variant, or a preview of a video or PDF. Before calling representation, check if the attachment can be represented by calling representable?. Some file formats can't be previewed by Active Storage out of the box (e.g. Word documents); if representable? returns false you may want to link to the file instead.
<ul> <% @bulletin . files . each do | file | %> <li> <% if file . representable? %> <%= image_tag file . representation ( resize_to_limit: [ 100 , 100 ]) %> <% else %> <%= link_to rails_blob_path ( file , disposition: "attachment" ) practice %> <%= image_tag "placeholder.png" , alt: "Download file" %> <% terminate %> <% stop %> </li> <% end %> </ul>
Internally, representation calls variant for images, and preview for previewable files. You can also call these methods directly.
8.1 Lazy vs Immediate Loading
By default, Active Storage will process representations lazily. This code:
image_tag file.representation(resize_to_limit: [100, 100])
Will generate an <img>
tag with the src
pointing to the ActiveStorage::Representations::RedirectController
. The browser will make a request to that controller, which will render a 302
redirect to the file on the remote service (or in proxy style, return the file contents). Loading the file lazily allows features like single employ URLs to work without slowing down your initial page loads.
This works fine for most cases.
If you want to generate URLs for images immediately, you can call .processed.url:
image_tag file.representation(resize_to_limit: [100, 100]).processed.url
The Active Storage variant tracker improves the performance of this by storing a record in the database if the requested representation has been processed before. Thus, the above code will only make an API call to the remote service (e.g. S3) once, and once a variant is stored, will use that. The variant tracker runs automatically, but can be disabled through config.active_storage.track_variants.
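For example, to turn the tracker off (a configuration sketch; the tracker is enabled by default):
# config/application.rb (or a specific environment file)
config.active_storage.track_variants = false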
If you're rendering lots of images on a page, the above example could result in N+1 queries loading all the variant records. To avoid these N+1 queries, use the named scopes on ActiveStorage::Attachment.
message.images.with_all_variant_records.each do |file|
  image_tag file.representation(resize_to_limit: [100, 100]).processed.url
end
8.2 Transforming Images
Transforming images allows you to display the image at your choice of dimensions. To create a variation of an image, call variant on the attachment. You can pass any transformation supported by the variant processor to the method. When the browser hits the variant URL, Active Storage will lazily transform the original blob into the specified format and redirect to its new service location.
<%= image_tag user . avatar . variant ( resize_to_limit: [ 100 , 100 ]) %>
If a variant is requested, Active Storage will automatically apply transformations depending on the image's format:
- Content types that are variable (as dictated by config.active_storage.variable_content_types) and not considered web images (as dictated by config.active_storage.web_image_content_types) will be converted to PNG.
- If quality is not specified, the variant processor's default quality for the format will be used.
Active Storage can use either Vips or MiniMagick as the variant processor. The default depends on your config.load_defaults target version, and the processor can be changed by setting config.active_storage.variant_processor.
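For example, to switch an application to libvips explicitly (a configuration sketch; the image_processing gem and libvips itself must be installed):
# config/application.rb (or a specific environment file)
config.active_storage.variant_processor = :vips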
The two processors are not fully compatible, so when migrating an existing application between MiniMagick and Vips, some changes have to be made if using format-specific options:
<!-- MiniMagick --> <%= image_tag user . avatar . variant ( resize_to_limit: [ 100 , 100 ], format: :jpeg , sampling_factor: "four:2:0" , strip: true , interlace: "JPEG" , colorspace: "sRGB" , quality: 80 ) %> <!-- Vips --> <%= image_tag user . avatar . variant ( resize_to_limit: [ 100 , 100 ], format: :jpeg , saver: { subsample_mode: "on" , strip: true , interlace: truthful , quality: lxxx }) %>
8.3 Previewing Files
Some non-image files can be previewed: that is, they can be presented as images. For example, a video file can be previewed by extracting its first frame. Out of the box, Active Storage supports previewing videos and PDF documents. To create a link to a lazily-generated preview, use the attachment's preview method:
<%= image_tag bulletin . video . preview ( resize_to_limit: [ 100 , 100 ]) %>
To add support for another format, add your own previewer. See the ActiveStorage::Preview documentation for more information.
9 Direct Uploads
Active Storage, with its included JavaScript library, supports uploading directly from the client to the cloud.
9.1 Usage
- Include activestorage.js in your application's JavaScript bundle.
  Using the asset pipeline:
  //= require activestorage
  Using the npm package:
  import * as ActiveStorage from "@rails/activestorage"
  ActiveStorage.start()
- Add direct_upload: true to your file field:
  <%= form.file_field :attachments, multiple: true, direct_upload: true %>
  Or, if you aren't using a FormBuilder, add the data attribute directly:
  <input type="file" data-direct-upload-url="<%= rails_direct_uploads_url %>" />
- Configure CORS on third-party storage services to allow direct upload requests.
- That's it! Uploads begin upon form submission.
9.2 Cross-Origin Resource Sharing (CORS) configuration
To make direct uploads to a third-party service work, you'll need to configure the service to allow cross-origin requests from your app. Consult the CORS documentation for your service:
- S3
- Google Cloud Storage
- Azure Storage
Take care to allow:
- All origins from which your app is accessed
- The PUT request method
- The following headers:
  - Origin
  - Content-Type
  - Content-MD5
  - Content-Disposition (except for Azure Storage)
  - x-ms-blob-content-disposition (for Azure Storage only)
  - x-ms-blob-type (for Azure Storage only)
  - Cache-Control (for GCS, only if cache_control is set)
No CORS configuration is required for the Disk service since it shares your app's origin.
9.2.1 Example: S3 CORS configuration
[ { "AllowedHeaders" : [ "*" ], "AllowedMethods" : [ "PUT" ], "AllowedOrigins" : [ "https://www.example.com" ], "ExposeHeaders" : [ "Origin" , "Content-Type" , "Content-MD5" , "Content-Disposition" ], "MaxAgeSeconds" : 3600 } ]
9.2.2 Example: Google Cloud Storage CORS configuration
[ { "origin" : [ "https://www.example.com" ], "method" : [ "PUT" ], "responseHeader" : [ "Origin" , "Content-Type" , "Content-MD5" , "Content-Disposition" ], "maxAgeSeconds" : 3600 } ]
9.2.3 Example: Azure Storage CORS configuration
<Cors> <CorsRule> <AllowedOrigins>https://www.example.com</AllowedOrigins> <AllowedMethods>PUT</AllowedMethods> <AllowedHeaders>Origin, Content-Type, Content-MD5, 10-ms-blob-content-disposition, x-ms-blob-blazon</AllowedHeaders> <MaxAgeInSeconds>3600</MaxAgeInSeconds> </CorsRule> </Cors>
9.3 Direct upload JavaScript events
Event name | Event target | Event data (event.detail) | Description
---|---|---|---
direct-uploads:start | <form> | None | A form containing files for direct upload fields was submitted.
direct-upload:initialize | <input> | {id, file} | Dispatched for every file after form submission.
direct-upload:start | <input> | {id, file} | A direct upload is starting.
direct-upload:before-blob-request | <input> | {id, file, xhr} | Before making a request to your application for direct upload metadata.
direct-upload:before-storage-request | <input> | {id, file, xhr} | Before making a request to store a file.
direct-upload:progress | <input> | {id, file, progress} | As requests to store files progress.
direct-upload:error | <input> | {id, file, error} | An error occurred. An alert will display unless this event is canceled.
direct-upload:end | <input> | {id, file} | A direct upload has ended.
direct-uploads:end | <form> | None | All direct uploads have ended.
9.4 Example
You can use these events to show the progress of an upload.
To show the uploaded files in a form:
// direct_uploads.js

addEventListener("direct-upload:initialize", event => {
  const { target, detail } = event
  const { id, file } = detail
  target.insertAdjacentHTML("beforebegin", `
    <div id="direct-upload-${id}" class="direct-upload direct-upload--pending">
      <div id="direct-upload-progress-${id}" class="direct-upload__progress" style="width: 0%"></div>
      <span class="direct-upload__filename"></span>
    </div>
  `)
  target.previousElementSibling.querySelector(`.direct-upload__filename`).textContent = file.name
})

addEventListener("direct-upload:start", event => {
  const { id } = event.detail
  const element = document.getElementById(`direct-upload-${id}`)
  element.classList.remove("direct-upload--pending")
})

addEventListener("direct-upload:progress", event => {
  const { id, progress } = event.detail
  const progressElement = document.getElementById(`direct-upload-progress-${id}`)
  progressElement.style.width = `${progress}%`
})

addEventListener("direct-upload:error", event => {
  event.preventDefault()
  const { id, error } = event.detail
  const element = document.getElementById(`direct-upload-${id}`)
  element.classList.add("direct-upload--error")
  element.setAttribute("title", error)
})

addEventListener("direct-upload:end", event => {
  const { id } = event.detail
  const element = document.getElementById(`direct-upload-${id}`)
  element.classList.add("direct-upload--complete")
})
Add styles:
/* direct_uploads.css */

.direct-upload {
  display: inline-block;
  position: relative;
  padding: 2px 4px;
  margin: 0 3px 3px 0;
  border: 1px solid rgba(0, 0, 0, 0.3);
  border-radius: 3px;
  font-size: 11px;
  line-height: 13px;
}

.direct-upload--pending {
  opacity: 0.6;
}

.direct-upload__progress {
  position: absolute;
  top: 0;
  left: 0;
  bottom: 0;
  opacity: 0.2;
  background: #0076ff;
  transition: width 120ms ease-out, opacity 60ms 60ms ease-in;
  transform: translate3d(0, 0, 0);
}

.direct-upload--complete .direct-upload__progress {
  opacity: 0.4;
}

.direct-upload--error {
  border-color: red;
}

input[type=file][data-direct-upload-url][disabled] {
  display: none;
}
nine.five Integrating with Libraries or Frameworks
If you want to use the Direct Upload feature from a JavaScript framework, or you want to integrate custom drag-and-drop solutions, you can use the DirectUpload class for this purpose. Upon receiving a file from your library of choice, instantiate a DirectUpload and call its create method. create takes a callback to invoke when the upload completes.
import { DirectUpload } from "@rails/activestorage"

const input = document.querySelector('input[type=file]')

// Bind to file drop - use the ondrop on a parent element or use a
// library like Dropzone
const onDrop = (event) => {
  event.preventDefault()
  const files = event.dataTransfer.files;
  Array.from(files).forEach(file => uploadFile(file))
}

// Bind to normal file selection
input.addEventListener('change', (event) => {
  Array.from(input.files).forEach(file => uploadFile(file))
  // you might clear the selected files from the input
  input.value = null
})

const uploadFile = (file) => {
  // your form needs the file_field direct_upload: true, which
  // provides data-direct-upload-url
  const url = input.dataset.directUploadUrl
  const upload = new DirectUpload(file, url)

  upload.create((error, blob) => {
    if (error) {
      // Handle the error
    } else {
      // Add an appropriately-named hidden input to the form with a
      // value of blob.signed_id so that the blob ids will be
      // transmitted in the normal upload flow
      const hiddenField = document.createElement('input')
      hiddenField.setAttribute("type", "hidden");
      hiddenField.setAttribute("value", blob.signed_id);
      hiddenField.name = input.name
      document.querySelector('form').appendChild(hiddenField)
    }
  })
}
If you need to track the progress of the file upload, you can pass a third parameter to the DirectUpload constructor. During the upload, DirectUpload will call the object's directUploadWillStoreFileWithXHR method. You can then bind your own progress handler on the XHR.
import { DirectUpload } from "@rails/activestorage"

class Uploader {
  constructor(file, url) {
    this.upload = new DirectUpload(this.file, this.url, this)
  }

  upload(file) {
    this.upload.create((error, blob) => {
      if (error) {
        // Handle the error
      } else {
        // Add an appropriately-named hidden input to the form
        // with a value of blob.signed_id
      }
    })
  }

  directUploadWillStoreFileWithXHR(request) {
    request.upload.addEventListener("progress",
      event => this.directUploadDidProgress(event))
  }

  directUploadDidProgress(event) {
    // Use event.loaded and event.total to update the progress bar
  }
}
Using Direct Uploads can sometimes result in a file that uploads but never attaches to a record. Consider purging unattached uploads.
10 Testing
Use fixture_file_upload to test uploading a file in an integration or controller test. Rails handles files like any other parameter.
class SignupController < ActionDispatch::IntegrationTest
  test "can sign up" do
    post signup_path, params: {
      name: "David",
      avatar: fixture_file_upload("david.png", "image/png")
    }

    user = User.order(:created_at).last
    assert user.avatar.attached?
  end
end
10.1 Discarding files created during tests
10.1.1 System tests
System tests clean up test data by rolling back a transaction. Because destroy is never called on an object, the attached files are never cleaned up. If you want to clear the files, you can do it in an after_teardown callback. Doing it here ensures that all connections created during the test are complete and you won't receive an error from Active Storage saying it can't find a file.
class ApplicationSystemTestCase < ActionDispatch::SystemTestCase
  # ...
  def after_teardown
    super
    FileUtils.rm_rf(ActiveStorage::Blob.service.root)
  end
  # ...
end
If you're using parallel tests and the DiskService, you should configure each process to use its own folder for Active Storage. This way, the teardown callback will only delete files from the relevant process's tests.
class ApplicationSystemTestCase < ActionDispatch::SystemTestCase
  # ...
  parallelize_setup do |i|
    ActiveStorage::Blob.service.root = "#{ActiveStorage::Blob.service.root}-#{i}"
  end
  # ...
end
If your system tests verify the deletion of a model with attachments and you're using Active Job, set your test environment to use the inline queue adapter so the purge job is executed immediately rather than at an unknown time in the future.
# Use inline job processing to make things happen immediately
config.active_job.queue_adapter = :inline
10.1.2 Integration tests
Similarly to system tests, files uploaded during integration tests will not be automatically cleaned up. If you want to clear the files, you can do it in an after_teardown callback.
class ActionDispatch::IntegrationTest
  def after_teardown
    super
    FileUtils.rm_rf(ActiveStorage::Blob.service.root)
  end
end
If you're using parallel tests and the Disk service, you should configure each process to use its own folder for Active Storage. This way, the teardown callback will only delete files from the relevant process's tests.
class ActionDispatch::IntegrationTest
  parallelize_setup do |i|
    ActiveStorage::Blob.service.root = "#{ActiveStorage::Blob.service.root}-#{i}"
  end
end
10.2 Adding attachments to fixtures
You can add attachments to your existing fixtures. First, you'll want to create a separate storage service:
# config/storage.yml
test_fixtures:
  service: Disk
  root: <%= Rails.root.join("tmp/storage_fixtures") %>
This tells Active Storage where to "upload" fixture files to, so it should be a temporary directory. By making it a different directory from your regular test service, you can separate fixture files from files uploaded during a test.
Next, create fixture files for the Active Storage classes:
# active_storage/attachments.yml
david_avatar:
  name: avatar
  record: david (User)
  blob: david_avatar_blob
# active_storage/blobs.yml
david_avatar_blob: <%= ActiveStorage::FixtureSet.blob filename: "david.png", service_name: "test_fixtures" %>
Then put a file in your fixtures directory (the default path is test/fixtures/files) with the corresponding filename. See the ActiveStorage::FixtureSet docs for more information.
Once everything is set up, you'll be able to access attachments in your tests:
class UserTest < ActiveSupport::TestCase
  def test_avatar
    avatar = users(:david).avatar

    assert avatar.attached?
    assert_not_nil avatar.download
    assert_equal 1000, avatar.byte_size
  end
end
10.2.1 Cleaning up fixtures
While files uploaded in tests are cleaned up at the end of each test, you only need to clean up fixture files once: when all your tests complete.
If you're using parallel tests, call parallelize_teardown:
class ActiveSupport::TestCase
  # ...
  parallelize_teardown do |i|
    FileUtils.rm_rf(ActiveStorage::Blob.services.fetch(:test_fixtures).root)
  end
  # ...
end
If you're not running parallel tests, use Minitest.after_run or the equivalent for your test framework (e.g. after(:suite) for RSpec):
# test_helper.rb
Minitest.after_run do
  FileUtils.rm_rf(ActiveStorage::Blob.services.fetch(:test_fixtures).root)
end
11 Implementing Support for Other Cloud Services
If you need to support a cloud service other than these, you will need to implement the service yourself. Each service extends ActiveStorage::Service by implementing the methods necessary to upload and download files to the cloud.
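As a rough sketch of the shape such a service takes (the class, its location, and the method bodies here are placeholders; check ActiveStorage::Service and the built-in services such as the Disk service for the exact signatures your Rails version expects):
# lib/active_storage/service/example_service.rb
# Configured in config/storage.yml with `service: Example`.
module ActiveStorage
  class Service::ExampleService < Service
    def initialize(bucket:, **options)
      @bucket  = bucket
      @options = options
    end

    def upload(key, io, checksum: nil, **)
      # Send the IO to the provider under `key`, verifying `checksum`.
    end

    def download(key, &block)
      # Return the file's contents, or stream chunks to the block if given.
    end

    def delete(key)
      # Remove the object stored under `key`.
    end

    def exist?(key)
      # Return true if an object is stored under `key`.
    end
  end
end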
12 Purging Unattached Uploads
There are cases where a file is uploaded but never attached to a record. This can happen when using Direct Uploads. You can query for unattached records using the unattached scope. Below is an example using a custom rake task.
namespace :active_storage do
  desc "Purges unattached Active Storage blobs. Run regularly."
  task purge_unattached: :environment do
    ActiveStorage::Blob.unattached.where("active_storage_blobs.created_at <= ?", 2.days.ago).find_each(&:purge_later)
  end
end
The query generated by ActiveStorage::Blob.unattached can be slow and potentially disruptive on applications with larger databases.
Feedback
You're encouraged to help improve the quality of this guide.
Please contribute if you see any typos or factual errors. To get started, you can read our documentation contributions section.
You may also find incomplete content or stuff that is not up to date. Please do add any missing documentation for main. Make sure to check Edge Guides first to verify if the issues are already fixed or not on the main branch. Check the Ruby on Rails Guides Guidelines for style and conventions.
If for whatever reason you spot something to fix but cannot patch it yourself, please open an issue.
And last but not least, any kind of discussion regarding Ruby on Rails documentation is very welcome on the rubyonrails-docs mailing list.
Source: https://edgeguides.rubyonrails.org/active_storage_overview.html