Amazon SQS is a fully managed message queuing service. It offers reliable, highly scalable messaging and transaction processing that lets you decouple tasks or processes that must communicate.
Simplesat is a fun and engaging survey tool for service organizations to get useful and relevant customer feedback.
simplesat + Amazon SQS: Create Message to Amazon SQS from New Feedback in simplesat
simplesat + Amazon SQS: Create JSON Message to Amazon SQS from New Feedback in simplesat
simplesat + Amazon SQS: Create Queue to Amazon SQS from New or Updated Feedback in simplesat
simplesat + Amazon SQS: Create Message to Amazon SQS from New or Updated Feedback in simplesat
It's easy to connect Amazon SQS + simplesat without coding knowledge. Start creating your own business flow.
Triggers when you add a new queue.
Triggers when new feedback is received.
Triggers when new feedback is received or existing feedback is updated.
Create a new JSON message using data from the source trigger.
Create a new message.
Create a new queue.
Amazon Simple Queue Service (SQS) is a reliable, highly available, hosted queue for storing messages as they travel between computers. Examples of messages include order notifications, inter-application notifications, and other information that needs to travel between independent applications or microservices.
For more information about Amazon SQS, see the Amazon SQS Developer Guide.
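To make the queue model above concrete, here is a minimal send/receive sketch in Python using boto3. The document does not prescribe a client library, and the queue URL, message fields, and function names here are illustrative assumptions, not part of the integration itself.

```python
import json


def build_order_notification(order_id: str, status: str) -> str:
    """Serialize an inter-application notification (e.g., an order update)
    as a JSON message body suitable for SQS."""
    return json.dumps({"order_id": order_id, "status": status})


def send_and_receive(queue_url: str) -> None:
    """Producer and consumer sides of one SQS round trip (illustrative)."""
    import boto3  # imported lazily so the helpers above need no AWS deps

    sqs = boto3.client("sqs")
    # Producer: enqueue a notification.
    sqs.send_message(
        QueueUrl=queue_url,
        MessageBody=build_order_notification("1234", "shipped"),
    )
    # Consumer: long-poll for up to 10 seconds, process, then delete.
    resp = sqs.receive_message(
        QueueUrl=queue_url, MaxNumberOfMessages=1, WaitTimeSeconds=10
    )
    for msg in resp.get("Messages", []):
        print(json.loads(msg["Body"]))
        sqs.delete_message(
            QueueUrl=queue_url, ReceiptHandle=msg["ReceiptHandle"]
        )
```

Deleting a message only after it has been processed is what gives SQS its at-least-once delivery guarantee: if the consumer crashes mid-processing, the message becomes visible again after its visibility timeout.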
simplesat is an open source SDK for interfacing with Amazon S3. It supports the basic operations on S3, such as upload, download, list, and delete, and it also provides support for uploading files to S3 from your local machine and for triggering Lambda functions from S3 events.
For more information about simplesat, see the simplesat GitHub repository.
Figure 1 – Integration of Amazon SQS and simplesat
The process flow of the integration is as follows:
A developer writes code to interact with S3. This code can run in a variety of ways (e.g., in a web server, in a background job processor, or as a standalone process that simply reads messages off of an S3-backed queue), and it uses the simplesat client for these interactions. As with all S3 API interactions, the queue is specified using the standard object key format (BucketName/KeyName), so in this case it would be something like “simplesat-queue/my_file”.

Once the developer has written the code, it needs to be deployed into an environment where it can access the queues required to process its messages. This could be as simple as running the code in a local development environment, or it could involve deploying the code into a production environment. To make sure that this deployment happens correctly, the correct queues must be created in that environment before any code tries to read from them. We therefore need a way to create queues ahead of time, so the code can remain unaware of them until they exist.

In our example, we will use Amazon API Gateway to do this. We defined an API Gateway resource called “simplesat-api” with two methods: CreateQueue and GetQueues. The CreateQueue method takes a name for the queue (such as “simplesat-queue/my_file”) and returns a unique URL representing the queue. The GetQueues method returns all existing queues that match a specific prefix (in this case, “simplesat-queue/”). This API Gateway resource is invoked by our client code when it tries to use the simplesat client for a particular object key (such as “simplesat-queue/my_file”). In that case, we create the necessary queue ahead of time by invoking CreateQueue and then return an appropriate URL for the object key in question (which we get from calling GetQueues). Note that we don’t actually create a real queue here; instead, we just get back a placeholder URL corresponding to a queue that doesn’t exist yet.
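The CreateQueue/GetQueues pair described above could be backed by handlers along these lines, sketched in Python with boto3. This is an assumption-laden illustration: the document does not show the handler code, SQS queue names cannot contain slashes (so a key like “simplesat-queue/my_file” must be sanitized first), and the function names are ours.

```python
import re


def queue_name_for_key(object_key: str) -> str:
    """Map an object key like 'simplesat-queue/my_file' to a valid SQS
    queue name: only alphanumerics, '-' and '_', at most 80 characters."""
    return re.sub(r"[^A-Za-z0-9_-]", "-", object_key)[:80]


def create_queue(object_key: str) -> str:
    """CreateQueue: ensure a queue exists for this key and return its URL.
    SQS create_queue is idempotent when the attributes are unchanged."""
    import boto3

    sqs = boto3.client("sqs")
    return sqs.create_queue(QueueName=queue_name_for_key(object_key))["QueueUrl"]


def get_queues(prefix: str) -> list:
    """GetQueues: return the URLs of all queues whose names match a prefix."""
    import boto3

    sqs = boto3.client("sqs")
    resp = sqs.list_queues(QueueNamePrefix=queue_name_for_key(prefix))
    # "QueueUrls" is absent from the response when nothing matches.
    return resp.get("QueueUrls", [])
```

Wiring these handlers behind API Gateway (e.g., as Lambda proxy integrations) is then a deployment detail independent of the client code.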
Once we have this URL, we can pass it to our client code and have it start making requests against it, without worrying about whether there is actually an underlying queue attached to that URL. Once the developers are satisfied with their code and have determined that it is ready for production, they can deploy it into whatever application or hosting environment they choose.

At this point, we have one or more applications running in production that use the simplesat client to communicate with S3. They write messages into an S3-backed queue using the default object key format (BucketName/KeyName), and the simplesat client handles reading those messages out of the queue on their behalf. There may be multiple applications using simplesat for different object keys in different environments; each application will have its own set of queues and will likely be reading directly from its own set of queues at any given time. In our example, there is a single application (which we’ll call app1) monitoring two queues (“simplesat-queue/app1_func1_1” and “simplesat-queue/app1_func1_2”), but there could also be many applications (each potentially monitoring many queues), depending on how you are using simplesat.

In our architecture, we will use Amazon SQS to notify other applications when a message arrives on a queue, so that those applications can act on it. For example, if app1 is monitoring a queue for messages from S3, it might write all incoming messages to another data store (e.g., DynamoDB) where they can be used for further processing later. If app2 is listening for messages from app1 (or perhaps just from app1 itself), it might send out emails or invoke some kind of notification system when new messages arrive in app1’s queue.
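The step where app1 drains a queue and writes each message into DynamoDB might look like the following boto3 sketch. The table name, item layout, and helper names are our own illustrative choices, not something the document specifies.

```python
import time


def to_dynamo_item(message_body: str, queue_name: str) -> dict:
    """Shape an incoming queue message as a DynamoDB item, keyed by the
    queue it arrived on and stamped with the receive time."""
    return {
        "queue": {"S": queue_name},
        "received_at": {"N": str(int(time.time()))},
        "body": {"S": message_body},
    }


def drain_queue_to_table(queue_url: str, table: str) -> None:
    """Read a batch of messages from one of app1's queues and persist
    each to DynamoDB before deleting it from the queue (illustrative)."""
    import boto3

    sqs = boto3.client("sqs")
    ddb = boto3.client("dynamodb")
    resp = sqs.receive_message(
        QueueUrl=queue_url, MaxNumberOfMessages=10, WaitTimeSeconds=5
    )
    for msg in resp.get("Messages", []):
        ddb.put_item(
            TableName=table,
            Item=to_dynamo_item(msg["Body"], queue_url.rsplit("/", 1)[-1]),
        )
        # Delete only after the write succeeds, so a crash re-delivers.
        sqs.delete_message(
            QueueUrl=queue_url, ReceiptHandle=msg["ReceiptHandle"]
        )
```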
Thus one of app2’s responsibilities is to watch for the messages app1 sends out to one of its queues (e.g., by calling an AWS Lambda function). In our example, app2 calls an AWS Lambda function named ‘on_message_arrived’ whenever it wants to know about messages arriving in app1’s queue(s). Here we invoke Lambda through API Gateway simply to make sure that all of the necessary infrastructure exists before we start sending messages to it; in a production setting, this could also be done another way (e.g., by having Lambda invoke API Gateway). Note that we invoke Lambda here only to ensure that everything is set up correctly before any messages go through it; we will not actually invoke Lambda again until after step 7 below.

After app2 has created this configuration through API Gateway, it will poll that configuration periodically (using something like CloudWatch Events). Every time it polls successfully, it creates an entry in DynamoDB with a TTL of 5 minutes; these entries act as “bookmarks” indicating where app2 was at various points in time (and thus what state its internal data store should be in at that point). Note that as soon as an entry is created in DynamoDB, another copy of app2 will start polling for updates again immediately; this way, every poll interval will either result in no changes being made or in multiple entries being created at once (in which case DynamoDB will resolve those conflicts automatically).

When a message arrives on one of app1’s queues, app1 calls the AWS Lambda function “app2_notify_by_sqs” (which we have already created) to notify app2 about the message, so that app2 can perform some appropriate action (such as writing that message to its own internal data store). When this event occurs, app2 updates its internal data store with whatever information it has available at that point in time, as discussed above.
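The 5-minute “bookmark” entries described above can be modeled as DynamoDB items whose expiry attribute drives the table’s TTL feature. This is a sketch under our own naming assumptions (`poll_id`, `expires_at`, and the table layout are illustrative); DynamoDB TTL expects the expiry attribute to hold a Unix epoch in seconds.

```python
import time

BOOKMARK_TTL_SECONDS = 5 * 60  # bookmarks expire five minutes after creation


def bookmark_item(poll_id: str, now=None) -> dict:
    """Build a DynamoDB bookmark item; `expires_at` is the attribute the
    table's TTL setting would be pointed at."""
    now = int(time.time()) if now is None else now
    return {
        "poll_id": {"S": poll_id},
        "polled_at": {"N": str(now)},
        "expires_at": {"N": str(now + BOOKMARK_TTL_SECONDS)},
    }


def record_bookmark(table: str, poll_id: str) -> None:
    """Persist one bookmark after a successful poll (illustrative)."""
    import boto3

    boto3.client("dynamodb").put_item(
        TableName=table, Item=bookmark_item(poll_id)
    )
```

Because each successful poll writes a fresh item, the newest unexpired bookmark always tells app2 how recent its internal state is.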
If it hasn’t already done so recently enough, it will start polling something like DynamoDB for changes (or perhaps just perform another poll manually) so that it can incorporate the new information into its internal data store immediately. Once app2 has received notification about the message from app1, it can check its own internal data store to see what state it should be in at this point in time, based on previous bookmarks (as described above). If app2 determines that its internal data store is out of date by more than 5 minutes (as indicated by our TTLs), it uses SNS to send an email to someone responsible for updating the data store, so that they can catch up with the latest changes in app1’s queue(s).

One other thing worth noting is that our architecture uses Amazon SNS instead of logging directly into DynamoDB because doing so lets us schedule custom logic around updating DynamoDB at various points in time: for example, waiting X days before performing a manual update, or waiting Y days before checking whether there are any new entries in DynamoDB before deciding whether to send an email to whoever is responsible for updating the data store manually. This way, we can disable alerts entirely when we don’t want them, without having to explicitly remove any bookmarks; similarly, if we want alerts again, we can re-enable them just as easily.
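The staleness check and SNS alert described above reduce to a small amount of logic, sketched here with boto3. The topic ARN, threshold constant, and function names are our assumptions; only `sns.publish` is a real AWS call.

```python
STALENESS_LIMIT_SECONDS = 5 * 60  # matches the 5-minute bookmark TTL


def is_stale(last_bookmark_epoch: int, now_epoch: int) -> bool:
    """True when the newest bookmark is older than the 5-minute limit,
    i.e., app2's internal data store is out of date."""
    return now_epoch - last_bookmark_epoch > STALENESS_LIMIT_SECONDS


def alert_if_stale(topic_arn: str, last_bookmark_epoch: int, now_epoch: int) -> None:
    """Email the on-call owner via SNS when the data store falls behind."""
    if not is_stale(last_bookmark_epoch, now_epoch):
        return
    import boto3

    boto3.client("sns").publish(
        TopicArn=topic_arn,
        Subject="app2 data store is out of date",
        Message=(
            f"No bookmark newer than {STALENESS_LIMIT_SECONDS}s was found; "
            "manual catch-up with app1's queue(s) is needed."
        ),
    )
```

Routing the alert through an SNS topic (rather than emailing directly) is what makes it easy to disable or re-enable notifications: subscribers can be added or removed without touching the bookmark logic.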
The process to integrate Amazon SQS and simplesat may seem complicated and intimidating. This is why Appy Pie Connect has come up with a simple, affordable, and quick solution to help you automate your workflows. Click on the button below to begin.