Android Aws Call Function When Upload Finished

Web applications often need the ability to let users upload files such as images, but ever so frequently, this is the only functionality on the project that requires an application server. Thousands of sites on the Internet could benefit from a CDN infrastructure, but are currently hosted on slower and less secure infrastructure because of things like customer pictures for a product page, resume uploads for the jobs page, etc. In this tutorial, you will learn how to build a secure, serverless file upload system using the "serverless framework". If you're new to the serverless framework, check out our "Serverless Framework Tutorial": Part one and Part two.

Straight-to-S3 File Uploads

The ideal scenario from the point of view of performance and scalability would be to let your users upload files directly to S3 (Simple Storage Service, a cloud storage service from AWS). That would be a highly scalable, reliable, and fast solution that wouldn't consume any application server resources. Well, for obvious security reasons we can't just leave an S3 bucket wide open for anyone to upload anything to it. But what if we introduce an intermediary step: an API endpoint that our client application can call, asking for "permission" for each new file upload? The API can validate the request (Is the request coming from our site? What type of file does it want to upload?) and then answer with a signed URL for a direct-to-S3 upload. Each returned URL is unique and valid only for a limited time, under the specified conditions.

The Serverless Project

Having everything installed and set up, let's start by creating a new project:

    serverless create --template aws-nodejs --path imageupload

If everything goes right, you should see the usual success message, and your base project files will be created.

    Serverless: Creating new Serverless service...
    Serverless: Creating the service in "/Users/cassiozen/Desktop/imageupload"

     _______                             __
    |   _   .-----.----.--.--.-----.----|  |.-----.-----.-----.
    |   |___|  -__|   _|  |  |  -__|   _|  ||  -__|__ --|__ --|
    |____   |_____|__|  \___/|_____|__| |__||_____|_____|_____|
    |   |   |             The Serverless Application Framework
    |       |                           serverless.com, v1.0.2
     -------'

    Serverless: Successfully created service with template: "aws-nodejs"

Provisioning the Upload S3 Bucket

Let's start by provisioning the S3 bucket that will be used for the image uploads. Edit the serverless.yml configuration and add a new resource. You need to whitelist the allowed CORS methods and origins. For this tutorial we will also let the files be publicly accessible (READ):

    # you can add CloudFormation resource templates here
    resources:
      Resources:
        UploadBucket:
          Type: AWS::S3::Bucket
          Properties:
            BucketName: slsupload
            AccessControl: PublicRead
            CorsConfiguration:
              CorsRules:
                - AllowedMethods:
                    - GET
                    - PUT
                    - POST
                    - HEAD
                  AllowedOrigins:
                    - "*"
                  AllowedHeaders:
                    - "*"

Remember that Amazon employs a very strict access policy in its services: by default, your Lambda functions won't have permission to do anything with this S3 bucket. So, scroll up the serverless.yml file to add a new IAM role in the provider section:

    provider:
      name: aws
      runtime: nodejs4.3
      iamRoleStatements:
        - Effect: "Allow"
          Action:
            - "s3:*"
          Resource: "arn:aws:s3:::slsupload/*"

Your complete serverless.yml file should look like this:

    service: imageupload

    provider:
      name: aws
      runtime: nodejs4.3
      iamRoleStatements:
        - Effect: "Allow"
          Action:
            - "s3:*"
          Resource: "arn:aws:s3:::slsupload/*"

    functions:
      hello:
        handler: handler.hello

    resources:
      Resources:
        UploadBucket:
          Type: AWS::S3::Bucket
          Properties:
            BucketName: slsupload
            AccessControl: PublicRead
            CorsConfiguration:
              CorsRules:
                - AllowedMethods:
                    - GET
                    - PUT
                    - POST
                    - HEAD
                  AllowedOrigins:
                    - "*"
                  AllowedHeaders:
                    - "*"

The requestUploadURL Lambda Function

Now we'll create and configure the actual Lambda function and API endpoint. To begin with, we will need to install the aws-sdk package:

    npm install --save aws-sdk

Next, open the handler.js file and require the module:

    'use strict';

    var AWS = require('aws-sdk');

Function Handler

In the requestUploadURL function handler, all we need to do is get an instance of AWS.S3 and call getSignedUrl to generate the signed upload URL. The getSignedUrl method accepts two parameters:

  • An operation, which the URL will be used for. The operation for uploading files is putObject.
  • A params object, specific to the operation you want to perform. The putObject operation requires two parameters:
    • Bucket: The name of the bucket where the file will be uploaded to.
    • Key: The name of the file you want to upload. You can assign a completely new name if you want.

You can check the complete list of available parameters for the putObject operation on the official documentation page.
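As the list above notes, the Key doesn't have to match the original file name. One reason to assign a new name is to avoid collisions: if two users upload a file called photo.jpg to the same bucket, the second upload silently overwrites the first. A minimal sketch of a unique-key helper (the timestamp-plus-random prefix scheme here is our own illustration, not part of the tutorial):

```javascript
// Hypothetical helper: build a collision-resistant S3 key while keeping
// the client-supplied file name readable at the end of the key.
function makeUniqueKey(fileName) {
  var suffix = Math.random().toString(36).slice(2, 8); // short random base36 tag
  return Date.now() + '-' + suffix + '/' + fileName;
}

var key = makeUniqueKey('photo.jpg');
// e.g. "1700000000000-k3f9ab/photo.jpg"
```

In the handler you would then use this generated key instead of params.name when building the putObject parameters.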

Only these two parameters are required, but in our example we will require the client to pass the name and type of the file they want to upload. We will then generate an upload URL valid only for that specific file type. It's also very common to ask for the file size to decide whether or not to allow the upload, but for simplicity we will skip this in our example.
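The type check mentioned above can happen before we ever sign a URL. Here is a minimal sketch of such a server-side guard; the allowed-types list is an assumption for illustration, since the tutorial itself only passes the type through:

```javascript
// Hypothetical whitelist guard: only agree to sign upload URLs for image types.
var ALLOWED_TYPES = ['image/png', 'image/jpeg', 'image/gif'];

function isAllowedUpload(params) {
  return Boolean(
    params &&
    params.name &&
    ALLOWED_TYPES.indexOf(params.type) !== -1
  );
}
```

In the handler, when isAllowedUpload(params) is false you could invoke the callback with a statusCode of 400 instead of returning a signed URL.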

Assuming the user will post a JSON with the file's name and type, here are the corresponding parameters for our putObject operation:

    var s3 = new AWS.S3();
    var params = JSON.parse(event.body);

    var s3Params = {
      Bucket: 'slsupload',
      Key: params.name,
      ContentType: params.type,
      ACL: 'public-read',
    };
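One parameter worth knowing about, although the tutorial doesn't use it: getSignedUrl also honors an Expires value (in seconds; the AWS SDK defaults to 900) that bounds how long the signed URL stays valid. A variant of the params object with an explicit expiry might look like this; the file name and content type are placeholders standing in for the client-supplied values:

```javascript
// Same shape as s3Params above, plus an explicit expiry window.
var s3ParamsWithExpiry = {
  Bucket: 'slsupload',
  Key: 'photo.jpg',          // placeholder; normally params.name
  ContentType: 'image/jpeg', // placeholder; normally params.type
  ACL: 'public-read',
  Expires: 60                // signed URL becomes invalid after 60 seconds
};
```

A short expiry narrows the window in which a leaked URL can be reused.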

Next, we will call s3.getSignedUrl and store the returned upload URL in a variable:

    var uploadURL = s3.getSignedUrl('putObject', s3Params);

Finally, we will invoke the function callback, returning the signed upload URL:

    callback(null, {
      statusCode: 200,
      headers: {
        'Access-Control-Allow-Origin': 'https://www.my-site.com'
      },
      body: JSON.stringify({ uploadURL: uploadURL }),
    })

Notice that we're restricting the service to be requested only by "https://www.my-site.com". This is an additional safety measure, as modern browsers won't let anyone initiate uploads originating from other domains. (If you want, though, you can use "*" to permit requests from any domain.)
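If you need to serve more than one site, a single hard-coded origin won't do, because the Access-Control-Allow-Origin header can only name one origin per response. A common pattern is to echo back the request's Origin header only when it appears in a whitelist. A minimal sketch, with a hypothetical origin list:

```javascript
// Hypothetical multi-origin CORS helper: echo the caller's origin
// back only when it is on the whitelist; otherwise emit 'null',
// which browsers will refuse to match.
var ALLOWED_ORIGINS = [
  'https://www.my-site.com',
  'https://staging.my-site.com'
];

function corsHeaders(requestOrigin) {
  var allowed = ALLOWED_ORIGINS.indexOf(requestOrigin) !== -1;
  return {
    'Access-Control-Allow-Origin': allowed ? requestOrigin : 'null'
  };
}
```

In the handler you would read the origin from event.headers and pass it to this helper when building the response headers.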

The complete source code (Gist):

    module.exports.requestUploadURL = (event, context, callback) => {
      var s3 = new AWS.S3();
      var params = JSON.parse(event.body);

      var s3Params = {
        Bucket: 'slsupload',
        Key: params.name,
        ContentType: params.type,
        ACL: 'public-read',
      };

      var uploadURL = s3.getSignedUrl('putObject', s3Params);

      callback(null, {
        statusCode: 200,
        headers: {
          'Access-Control-Allow-Origin': 'https://www.my-site.com'
        },
        body: JSON.stringify({ uploadURL: uploadURL }),
      })
    }

Function Event Configuration

The last step is setting up your function with an HTTP endpoint. Back in serverless.yml, add this to your functions section:

    functions:
      requestUploadURL:
        handler: handler.requestUploadURL
        events:
          - http:
              path: requestUploadURL
              method: post
              cors: true

The Sample Client File

Now, to upload a file directly to S3, all your client code needs to do is first ask for an upload URL and then submit the blob directly to S3. For example, let's create a bare-bones drag-n-drop file share:

That's exactly what the sample code below does (Gist):

    <!DOCTYPE html>
    <html lang="en">
    <head>
      <title>A File Upload Demo</title>
      <style>
        html, body {
          height: 100%;
          margin: 0;
        }
        body {
          font-family: 'Helvetica Neue', Helvetica, Arial, sans-serif;
        }
        .aligner {
          height: 100%;
          display: flex;
          align-items: center;
          justify-content: center;
          flex-direction: column;
        }
        #drop {
          height: 200px;
          width: 200px;
          border-radius: 100px;
          color: #fff;
          background-color: #baf;
          font-size: 20px;
          display: flex;
          align-items: center;
        }
      </style>
    </head>
    <body>
      <div class="aligner">
        <div id="drop">Drop files here.</div>
        <div id="list">
          <h1>Uploaded Files:</h1>
        </div>
      </div>

      <script type="text/javascript">
        var drop = document.getElementById('drop');
        var list = document.getElementById('list');
        var apiBaseURL = "https://74t3vol55c.execute-api.us-east-1.amazonaws.com/dev";

        function cancel(e) {
          e.preventDefault();
          return false;
        }

        function handleDrop(e) {
          e.preventDefault();
          var dt = e.dataTransfer;
          var files = dt.files;
          for (var i = 0; i < files.length; i++) {
            // const keeps each iteration's file/reader in its own closure,
            // so the async loadend handlers don't all see the last file.
            const file = files[i];
            const reader = new FileReader();
            reader.addEventListener('loadend', function (e) {
              // Step 1: ask our API for a signed upload URL for this file.
              fetch(apiBaseURL + "/requestUploadURL", {
                method: "POST",
                headers: {
                  'Content-Type': 'application/json'
                },
                body: JSON.stringify({
                  name: file.name,
                  type: file.type
                })
              })
              .then(function (response) {
                return response.json();
              })
              .then(function (json) {
                // Step 2: PUT the file contents directly to S3.
                return fetch(json.uploadURL, {
                  method: "PUT",
                  body: new Blob([reader.result], { type: file.type })
                })
              })
              .then(function () {
                var uploadedFileNode = document.createElement('div');
                uploadedFileNode.innerHTML = '<a href="//s3.amazonaws.com/slsupload/' + file.name + '">' + file.name + '</a>';
                list.appendChild(uploadedFileNode);
              });
            });
            reader.readAsArrayBuffer(file);
          }
          return false;
        }

        drop.addEventListener('dragenter', cancel);
        drop.addEventListener('dragover', cancel);
        drop.addEventListener('drop', handleDrop);
      </script>
    </body>
    </html>

There it is: a highly scalable, serverless image upload service. This is just a basic implementation of the direct-to-S3 concept, but you can use it as a base and further extend and customize it to your needs (like saving all uploaded file names in a database, for example).


Source: https://www.netlify.com/blog/2016/11/17/serverless-file-uploads/
