Amazon AWS-Certified-Database-Specialty free practice questions below:
NEW QUESTION 1
A large financial services company requires that all data be encrypted in transit. A Developer is attempting to connect to an Amazon RDS DB instance using the company VPC for the first time with credentials provided by a Database Specialist. Other members of the Development team can connect, but this user is consistently receiving an error indicating a communications link failure. The Developer asked the Database Specialist to reset the password a number of times, but the error persists.
Which step should be taken to troubleshoot this issue?
- A. Ensure that the database option group for the RDS DB instance allows ingress from the Developer machine’s IP address
- B. Ensure that the RDS DB instance’s subnet group includes a public subnet to allow the Developer to connect
- C. Ensure that the RDS DB instance has not reached its maximum connections limit
- D. Ensure that the connection is using SSL and is addressing the port where the RDS DB instance is listening for encrypted connections
Answer: D
Explanation:
https://docs.aws.amazon.com/AmazonRDS/latest/UserGuide/SQLServer.Concepts.General.SSL.Using.html
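For a TLS connection, the client must present the RDS CA bundle and target the port the instance actually listens on. A minimal sketch, assuming a MySQL-family engine for brevity, the pymysql package, and hypothetical endpoint and credentials:

```python
import pymysql  # pip install pymysql

# Endpoint and credentials below are hypothetical placeholders.
connection = pymysql.connect(
    host="mydb.abc123xyz.us-east-1.rds.amazonaws.com",
    port=3306,                      # the port the DB instance listens on
    user="app_user",
    password="example-password",
    database="appdb",
    # Forces TLS; the CA bundle is downloaded from AWS
    # (https://truststore.pki.rds.amazonaws.com/global/global-bundle.pem).
    ssl={"ca": "/path/to/global-bundle.pem"},
)
print(connection.get_server_info())
```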
NEW QUESTION 2
A database specialist at a large multi-national financial company is in charge of designing the disaster recovery strategy for a highly available application that is in development. The application uses an Amazon DynamoDB table as its data store. The application requires a recovery time objective (RTO) of 1 minute and a recovery point objective (RPO) of 2 minutes.
Which operationally efficient disaster recovery strategy should the database specialist recommend for the DynamoDB table?
- A. Create a DynamoDB stream that is processed by an AWS Lambda function that copies the data to a DynamoDB table in another Region.
- B. Use a DynamoDB global table replica in another Region. Enable point-in-time recovery for both tables.
- C. Use a DynamoDB Accelerator (DAX) table in another Region. Enable point-in-time recovery for the table.
- D. Create an AWS Backup plan and assign the DynamoDB table as a resource.
Answer: B
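Both parts of the answer map to single API calls. A minimal boto3 sketch, assuming the current (2019.11.21) global tables version and a hypothetical table name:

```python
import boto3

dynamodb = boto3.client("dynamodb", region_name="us-west-1")

# Add a replica in another Region (requires the current global tables version).
dynamodb.update_table(
    TableName="GameState",  # hypothetical table name
    ReplicaUpdates=[{"Create": {"RegionName": "us-east-1"}}],
)

# Enable point-in-time recovery on the source table; repeat per replica Region.
dynamodb.update_continuous_backups(
    TableName="GameState",
    PointInTimeRecoverySpecification={"PointInTimeRecoveryEnabled": True},
)
```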
NEW QUESTION 3
An AWS CloudFormation stack that included an Amazon RDS DB instance was accidentally deleted and recent data was lost. A Database Specialist needs to add RDS settings to the CloudFormation template to reduce the chance of accidental instance data loss in the future.
Which settings will meet this requirement? (Choose three.)
- A. Set DeletionProtection to True
- B. Set MultiAZ to True
- C. Set TerminationProtection to True
- D. Set DeleteAutomatedBackups to False
- E. Set DeletionPolicy to Delete
- F. Set DeletionPolicy to Retain
Answer: ACF
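DeletionPolicy is a resource attribute, while DeletionProtection and DeleteAutomatedBackups are properties of the AWS::RDS::DBInstance resource. A minimal sketch of the relevant template fragment, written as the JSON form of a CloudFormation template in Python (resource names and property values are hypothetical):

```python
import json

template = {
    "Resources": {
        "AppDatabase": {
            "Type": "AWS::RDS::DBInstance",
            # Keep the DB instance if the stack itself is deleted.
            "DeletionPolicy": "Retain",
            "Properties": {
                "DBInstanceClass": "db.t3.medium",
                "Engine": "mysql",
                "AllocatedStorage": "100",
                "MasterUsername": "admin",
                "MasterUserPassword": "{{resolve:secretsmanager:app-db-secret}}",
                "DeletionProtection": True,       # blocks DeleteDBInstance calls
                "DeleteAutomatedBackups": False,  # keep backups after deletion
            },
        }
    }
}
print(json.dumps(template, indent=2))
```

Note that stack termination protection (choice C) is enabled on the stack itself rather than declared in the template.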
NEW QUESTION 4
A company has an Amazon RDS Multi-AZ DB instance that is 200 GB in size with an RPO of 6 hours. To meet the company’s disaster recovery policies, the database backup needs to be copied into another Region. The company requires the solution to be cost-effective and operationally efficient.
What should a Database Specialist do to copy the database backup into a different Region?
- A. Use Amazon RDS automated snapshots and use AWS Lambda to copy the snapshot into another Region
- B. Use Amazon RDS automated snapshots every 6 hours and use Amazon S3 cross-Region replication to copy the snapshot into another Region
- C. Create an AWS Lambda function to take an Amazon RDS snapshot every 6 hours and use a second Lambda function to copy the snapshot into another Region
- D. Create a cross-Region read replica for Amazon RDS in another Region and take an automated snapshot of the read replica
Answer: C
Explanation:
Automated system snapshots cannot be scheduled on a custom 6-hour interval, so the snapshot and cross-Region copy must be driven by a script, such as an AWS Lambda function: https://aws.amazon.com/blogs/database/automating-cross-region-cross-account
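A minimal sketch of the two Lambda handlers, assuming hypothetical identifiers, that the first function is triggered every 6 hours (for example, by an Amazon EventBridge schedule), and that the second is triggered by a snapshot-completion event that supplies the snapshot ARN:

```python
import boto3
from datetime import datetime, timezone

SOURCE_REGION = "us-east-1"
TARGET_REGION = "us-west-2"

def take_snapshot(event, context):
    """Runs every 6 hours: takes a manual snapshot of the DB instance."""
    rds = boto3.client("rds", region_name=SOURCE_REGION)
    stamp = datetime.now(timezone.utc).strftime("%Y%m%d%H%M")
    rds.create_db_snapshot(
        DBInstanceIdentifier="prod-db",          # hypothetical identifier
        DBSnapshotIdentifier=f"prod-db-{stamp}",
    )

def copy_snapshot(event, context):
    """Runs when the snapshot completes: copies it to the DR Region."""
    snapshot_arn = event["snapshot_arn"]         # hypothetical event shape
    rds = boto3.client("rds", region_name=TARGET_REGION)
    rds.copy_db_snapshot(
        SourceDBSnapshotIdentifier=snapshot_arn,
        TargetDBSnapshotIdentifier="prod-db-dr-copy",
        SourceRegion=SOURCE_REGION,              # boto3 presigns the copy URL
    )
```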
NEW QUESTION 5
An ecommerce business is migrating its main application database to Amazon Aurora MySQL. The firm is now running OLTP stress tests with concurrent database connections. During the first round of testing, a database professional noticed slow performance for some specific write operations.
Examining the Amazon CloudWatch metrics for the Aurora DB cluster revealed CPU utilization of 90%.
Which actions should the database professional take to determine the main cause of excessive CPU use and sluggish performance most effectively? (Select two.)
- A. Enable Enhanced Monitoring at less than 30 seconds of granularity to review the operating system metrics before the next round of tests.
- B. Review the VolumeBytesUsed metric in CloudWatch to see if there is a spike in write I/O.
- C. Review Amazon RDS Performance Insights to identify the top SQL statements and wait events.
- D. Review Amazon RDS API calls in AWS CloudTrail to identify long-running queries.
- E. Enable Advanced Auditing to log QUERY events in Amazon CloudWatch before the next round of tests.
Answer: AC
Explanation:
https://aws.amazon.com/premiumsupport/knowledge-center/rds-instance-high-cpu/
https://aws.amazon.com/premiumsupport/knowledge-center/rds-mysql-slow-query/
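A minimal boto3 sketch of both recommended actions, assuming Performance Insights is already enabled on the instance; the identifiers and role ARN are hypothetical:

```python
import boto3
from datetime import datetime, timedelta, timezone

# A (Enhanced Monitoring): 1-second OS-level metrics.
boto3.client("rds", region_name="us-east-1").modify_db_instance(
    DBInstanceIdentifier="aurora-writer",
    MonitoringInterval=1,
    MonitoringRoleArn="arn:aws:iam::123456789012:role/rds-monitoring-role",
)

# C (Performance Insights): top SQL statements by database load, last hour.
pi = boto3.client("pi", region_name="us-east-1")
end = datetime.now(timezone.utc)
response = pi.describe_dimension_keys(
    ServiceType="RDS",
    Identifier="db-ABCDEFGHIJKLMNOP",   # hypothetical DbiResourceId
    StartTime=end - timedelta(hours=1),
    EndTime=end,
    Metric="db.load.avg",
    PeriodInSeconds=60,
    GroupBy={"Group": "db.sql"},        # "db.wait_event" groups by wait event
)
for key in response["Keys"]:
    print(key["Dimensions"], key["Total"])
```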
NEW QUESTION 6
A banking company recently launched an Amazon RDS for MySQL DB instance as part of a proof-of-concept project. A database specialist has configured automated database snapshots. As a part of routine testing, the database specialist noticed one day that the automated database snapshot was not created.
Which of the following are possible reasons why the snapshot was not created? (Choose two.)
- A. A copy of the RDS automated snapshot for this DB instance is in progress within the same AWS Region.
- B. A copy of the RDS automated snapshot for this DB instance is in progress in a different AWS Region.
- C. The RDS maintenance window is not configured.
- D. The RDS DB instance is in the STORAGE_FULL state.
- E. RDS event notifications have not been enabled.
Answer: AD
Explanation:
https://docs.aws.amazon.com/AmazonRDS/latest/UserGuide/USER_WorkingWithAutomatedBackups.html
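Both conditions are visible through the RDS API. A minimal sketch with a hypothetical instance identifier (in-Region snapshot copies surface as manual snapshots with an in-progress status):

```python
import boto3

rds = boto3.client("rds", region_name="us-east-1")

# Cause D: backups are skipped while the instance is in the STORAGE_FULL state.
instance = rds.describe_db_instances(DBInstanceIdentifier="poc-db")
print("Status:", instance["DBInstances"][0]["DBInstanceStatus"])

# Cause A: an in-Region copy of an automated snapshot blocks the next backup.
snapshots = rds.describe_db_snapshots(
    DBInstanceIdentifier="poc-db", SnapshotType="manual"
)
for snap in snapshots["DBSnapshots"]:
    print(snap["DBSnapshotIdentifier"], snap["Status"], snap["PercentProgress"])
```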
NEW QUESTION 7
An online advertising website uses an Amazon DynamoDB table with on-demand capacity mode as its data store. The website also has a DynamoDB Accelerator (DAX) cluster in the same VPC as its web application server. The application needs to perform infrequent writes and many strongly consistent reads from the data store by querying the DAX cluster.
During a performance audit, a systems administrator notices that the application can look up items by using the DAX cluster. However, the QueryCacheHits metric for the DAX cluster consistently shows 0 while the QueryCacheMisses metric continuously keeps growing in Amazon CloudWatch.
What is the MOST likely reason for this occurrence?
- A. A VPC endpoint was not added to access DynamoDB.
- B. Strongly consistent reads are always passed through DAX to DynamoDB.
- C. DynamoDB is scaling due to a burst in traffic, resulting in degraded performance.
- D. A VPC endpoint was not added to access CloudWatch.
Answer: B
Explanation:
https://docs.aws.amazon.com/amazondynamodb/latest/developerguide/DAX.concepts.html
"If the request specifies strongly consistent reads, DAX passes the request through to DynamoDB. The results from DynamoDB are not cached in DAX. Instead, they are simply returned to the application."
NEW QUESTION 8
After restoring an Amazon RDS snapshot from 3 days ago, a company’s Development team cannot connect to the restored RDS DB instance. What is the likely cause of this problem?
- A. The restored DB instance does not have Enhanced Monitoring enabled
- B. The production DB instance is using a custom parameter group
- C. The restored DB instance is using the default security group
- D. The production DB instance is using a custom option group
Answer: C
Explanation:
https://aws.amazon.com/premiumsupport/knowledge-center/rds-cannot-connect/
https://docs.aws.amazon.com/AmazonRDS/latest/UserGuide/USER_RestoreFromSnapshot.html
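Because the restore creates a new instance with the default security group attached, the fix is to reattach the application's VPC security groups. A minimal sketch with hypothetical identifiers:

```python
import boto3

rds = boto3.client("rds", region_name="us-east-1")

# Reattach the application's security group to the restored instance.
rds.modify_db_instance(
    DBInstanceIdentifier="restored-db",
    VpcSecurityGroupIds=["sg-0123456789abcdef0"],
    ApplyImmediately=True,
)
```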
NEW QUESTION 9
An online shopping company has a large inflow of shopping requests daily. As a result, there is a consistent load on the company’s Amazon RDS database. A database specialist needs to ensure the database is up and running at all times. The database specialist wants an automatic notification system for issues that may cause database downtime or for configuration changes made to the database.
What should the database specialist do to achieve this? (Choose two.)
- A. Create an Amazon CloudWatch Events event to send a notification using Amazon SNS on every API call logged in AWS CloudTrail.
- B. Subscribe to an RDS event subscription and configure it to use an Amazon SNS topic to send notifications.
- C. Use Amazon SES to send notifications based on configured Amazon CloudWatch Events events.
- D. Configure Amazon CloudWatch alarms on various metrics, such as FreeStorageSpace for the RDS instance.
- E. Enable email notifications for AWS Trusted Advisor.
Answer: BD
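A minimal sketch of both pieces, assuming an existing (hypothetical) SNS topic ARN and instance identifier; the alarm threshold is illustrative:

```python
import boto3

TOPIC_ARN = "arn:aws:sns:us-east-1:123456789012:db-alerts"  # hypothetical

# B: RDS event subscription for failures and configuration changes.
boto3.client("rds", region_name="us-east-1").create_event_subscription(
    SubscriptionName="rds-critical-events",
    SnsTopicArn=TOPIC_ARN,
    SourceType="db-instance",
    EventCategories=["failure", "configuration change", "availability"],
    Enabled=True,
)

# D: CloudWatch alarm on free storage space.
boto3.client("cloudwatch", region_name="us-east-1").put_metric_alarm(
    AlarmName="rds-low-free-storage",
    Namespace="AWS/RDS",
    MetricName="FreeStorageSpace",
    Dimensions=[{"Name": "DBInstanceIdentifier", "Value": "shop-db"}],
    Statistic="Average",
    Period=300,
    EvaluationPeriods=1,
    Threshold=5 * 1024**3,            # 5 GiB in bytes
    ComparisonOperator="LessThanThreshold",
    AlarmActions=[TOPIC_ARN],
)
```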
NEW QUESTION 10
A company is moving its fraud detection application from on premises to the AWS Cloud and is using Amazon Neptune for data storage. The company has set up a 1 Gbps AWS Direct Connect connection to migrate 25 TB of fraud detection data from the on-premises data center to a Neptune DB instance. The company already has an Amazon S3 bucket and an S3 VPC endpoint, and 80% of the company’s network bandwidth is available.
How should the company perform this data load?
- A. Use an AWS SDK with a multipart upload to transfer the data from on premises to the S3 bucket. Use the Copy command for Neptune to move the data in bulk from the S3 bucket to the Neptune DB instance.
- B. Use AWS Database Migration Service (AWS DMS) to transfer the data from on premises to the S3 bucket. Use the Loader command for Neptune to move the data in bulk from the S3 bucket to the Neptune DB instance.
- C. Use AWS DataSync to transfer the data from on premises to the S3 bucket. Use the Loader command for Neptune to move the data in bulk from the S3 bucket to the Neptune DB instance.
- D. Use the AWS CLI to transfer the data from on premises to the S3 bucket. Use the Copy command for Neptune to move the data in bulk from the S3 bucket to the Neptune DB instance.
Answer: C
Explanation:
"AWS DataSync is an online data transfer service that simplifies, automates, and accelerates moving data between on-premises storage systems and AWS storage services, and also between AWS storage services."
https://docs.aws.amazon.com/neptune/latest/userguide/bulk-load.html
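The Loader command is an HTTP request to the cluster's loader endpoint, reachable here because the S3 VPC endpoint is already in place. A minimal sketch using the requests library; the endpoint, bucket, and role ARN are hypothetical:

```python
import requests  # pip install requests

# All values below are hypothetical placeholders.
loader_url = (
    "https://my-neptune.cluster-abc123.us-east-1"
    ".neptune.amazonaws.com:8182/loader"
)

response = requests.post(
    loader_url,
    json={
        "source": "s3://fraud-data-bucket/export/",
        "format": "csv",             # property-graph CSV; RDF formats also work
        "iamRoleArn": "arn:aws:iam::123456789012:role/NeptuneLoadFromS3",
        "region": "us-east-1",
        "failOnError": "FALSE",
    },
    timeout=30,
)
print(response.json())  # returns a loadId that can be polled for status
```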
NEW QUESTION 11
A financial services organization uses Amazon RDS for Oracle with Transparent Data Encryption (TDE). The organization is obligated to encrypt its data at rest at all times. The decryption key must be widely distributed, and access to the key must be restricted. To comply with regulatory requirements, the organization must be able to rotate the encryption key on demand. If any potential security vulnerabilities are discovered, the organization must be able to disable the key. Additionally, the company's overhead must be kept to a minimum.
What method should the database administrator use to configure the encryption to fulfill these specifications?
- A. AWS CloudHSM
- B. AWS Key Management Service (AWS KMS) with an AWS managed key
- C. AWS Key Management Service (AWS KMS) with server-side encryption
- D. AWS Key Management Service (AWS KMS) CMK with customer-provided material
Answer: D
Explanation:
https://docs.aws.amazon.com/whitepapers/latest/kms-best-practices/aws-managed-and-customer-managed-cmks
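A CMK with customer-provided (imported) key material can be disabled at any time, and rotation is performed manually on demand. A minimal sketch of the import flow; wrapping the key material is omitted and all values are placeholders:

```python
import boto3

kms = boto3.client("kms", region_name="us-east-1")

# 1. Create a CMK with no key material (Origin=EXTERNAL).
key_id = kms.create_key(
    Origin="EXTERNAL", Description="TDE master key, customer-provided material"
)["KeyMetadata"]["KeyId"]

# 2. Fetch a public wrapping key and an import token.
params = kms.get_parameters_for_import(
    KeyId=key_id,
    WrappingAlgorithm="RSAES_OAEP_SHA_256",
    WrappingKeySpec="RSA_2048",
)

# 3. Wrap the locally generated key material with params["PublicKey"]
#    (omitted here), then import it:
# kms.import_key_material(
#     KeyId=key_id,
#     ImportToken=params["ImportToken"],
#     EncryptedKeyMaterial=wrapped_material,
#     ExpirationModel="KEY_MATERIAL_DOES_NOT_EXPIRE",
# )

# On-demand rotation is manual: create a new CMK, import new material, and
# repoint the key alias. If a vulnerability is discovered:
# kms.disable_key(KeyId=key_id)
```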
NEW QUESTION 12
A financial institution hosts its online application on AWS. The application's database runs on Amazon RDS for MySQL with automated backups enabled.
The application has logically corrupted the database, leaving the application unresponsive. The exact moment the corruption occurred has been determined, and it falls within the backup retention period.
How should a database professional restore the database to its state immediately before the corruption?
- A. Use the point-in-time restore capability to restore the DB instance to the specified time. No changes to the application connection string are required.
- B. Use the point-in-time restore capability to restore the DB instance to the specified time. Change the application connection string to the new, restored DB instance.
- C. Restore using the latest automated backup. Change the application connection string to the new, restored DB instance.
- D. Restore using the appropriate automated backup. No changes to the application connection string are required.
Answer: B
Explanation:
"When you perform a restore operation to a point in time or from a DB Snapshot, a new DB Instance is created with a new endpoint (the old DB Instance can be deleted if so desired). This is done to enable you to create multiple DB Instances from a specific DB Snapshot or point in time."
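A minimal sketch of the restore and of looking up the new endpoint for the application's connection string; identifiers and the restore time are hypothetical:

```python
import boto3
from datetime import datetime, timezone

rds = boto3.client("rds", region_name="us-east-1")

# Restore to just before the corruption; this creates a NEW instance.
rds.restore_db_instance_to_point_in_time(
    SourceDBInstanceIdentifier="prod-mysql",
    TargetDBInstanceIdentifier="prod-mysql-restored",
    RestoreTime=datetime(2024, 1, 15, 9, 42, tzinfo=timezone.utc),
)

# The application connection string must point at the restored
# instance's new endpoint.
waiter = rds.get_waiter("db_instance_available")
waiter.wait(DBInstanceIdentifier="prod-mysql-restored")
endpoint = rds.describe_db_instances(
    DBInstanceIdentifier="prod-mysql-restored"
)["DBInstances"][0]["Endpoint"]["Address"]
print(endpoint)
```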
NEW QUESTION 13
An Amazon RDS EBS-optimized instance with Provisioned IOPS (PIOPS) storage is using less than half of its allocated IOPS over the course of several hours under constant load. The RDS instance exhibits multi-second read and write latency, and uses all of its maximum bandwidth for read throughput, yet the instance uses less than half of its CPU and RAM resources.
What should a Database Specialist do in this situation to increase performance and return latency to sub-second levels?
- A. Increase the size of the DB instance storage
- B. Change the underlying EBS storage type to General Purpose SSD (gp2)
- C. Disable EBS optimization on the DB instance
- D. Change the DB instance to an instance class with a higher maximum bandwidth
Answer: D
Explanation:
https://docs.amazonaws.cn/en_us/AmazonRDS/latest/UserGuide/CHAP_BestPractices.html
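Changing to an instance class with a higher maximum EBS bandwidth is one API call. A minimal sketch; the identifier and target class are illustrative:

```python
import boto3

rds = boto3.client("rds", region_name="us-east-1")

# A larger instance class raises the maximum EBS bandwidth, relieving the
# throughput bottleneck even though CPU, RAM, and IOPS are not saturated.
rds.modify_db_instance(
    DBInstanceIdentifier="analytics-db",
    DBInstanceClass="db.r5.4xlarge",   # illustrative target class
    ApplyImmediately=True,             # apply now, not in the maintenance window
)
```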
NEW QUESTION 14
A business uses Amazon DynamoDB global tables to power an online game played by gamers around the globe. As the game gained popularity, the volume of requests to DynamoDB rose substantially. Recently, gamers have complained that the game state is inconsistent between countries. A database professional notices that the ReplicationLatency metric for many replica tables is abnormally high.
Which strategy will resolve the issue?
- A. Configure all replica tables to use DynamoDB auto scaling.
- B. Configure a DynamoDB Accelerator (DAX) cluster on each of the replicas.
- C. Configure the primary table to use DynamoDB auto scaling and the replica tables to use manually provisioned capacity.
- D. Configure the table-level write throughput limit service quota to a higher value.
Answer: A
Explanation:
https://docs.aws.amazon.com/amazondynamodb/latest/developerguide/V2globaltables_reqs_bestpractices.html
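Auto scaling must give every replica enough write capacity to absorb replicated writes, which is what brings ReplicationLatency back down. A minimal sketch for one replica table's write dimension; names and limits are hypothetical:

```python
import boto3

autoscaling = boto3.client("application-autoscaling", region_name="us-east-1")

# Register the replica table's write capacity as a scalable target.
autoscaling.register_scalable_target(
    ServiceNamespace="dynamodb",
    ResourceId="table/GameState",                      # hypothetical table
    ScalableDimension="dynamodb:table:WriteCapacityUnits",
    MinCapacity=100,
    MaxCapacity=10000,
)

# Target tracking keeps write utilization near 70%.
autoscaling.put_scaling_policy(
    PolicyName="GameStateWriteScaling",
    ServiceNamespace="dynamodb",
    ResourceId="table/GameState",
    ScalableDimension="dynamodb:table:WriteCapacityUnits",
    PolicyType="TargetTrackingScaling",
    TargetTrackingScalingPolicyConfiguration={
        "TargetValue": 70.0,
        "PredefinedMetricSpecification": {
            "PredefinedMetricType": "DynamoDBWriteCapacityUtilization"
        },
    },
)
```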
NEW QUESTION 15
A worldwide digital advertising corporation collects browser information in order to serve visitors contextually relevant images, pages, and links. A single page load may generate many events, each of which must be stored separately. A single event may have a maximum size of 200 KB and an average size of 10 KB. Each page load requires a query of the user's browsing history in order to deliver suggestions for targeted advertising. The corporation anticipates more than 1 billion daily page views from users in the United States, Europe, Hong Kong, and India. The data structure varies by event. Additionally, browsing information must be written and read with very low latency to guarantee a positive user experience.
Which database solution satisfies these criteria?
- A. Amazon DocumentDB
- B. Amazon RDS Multi-AZ deployment
- C. Amazon DynamoDB global table
- D. Amazon Aurora Global Database
Answer: C
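DynamoDB suits this workload: items up to 400 KB, flexible per-item structure, and single-digit-millisecond reads and writes, with global tables replicating to each user Region. A minimal sketch of the base on-demand table (names are hypothetical); replicas would then be added as in the NEW QUESTION 2 sketch:

```python
import boto3

dynamodb = boto3.client("dynamodb", region_name="us-east-1")

# On-demand capacity absorbs unpredictable, billion-request-per-day traffic.
dynamodb.create_table(
    TableName="BrowsingEvents",
    AttributeDefinitions=[
        {"AttributeName": "user_id", "AttributeType": "S"},
        {"AttributeName": "event_ts", "AttributeType": "S"},
    ],
    KeySchema=[
        {"AttributeName": "user_id", "KeyType": "HASH"},
        {"AttributeName": "event_ts", "KeyType": "RANGE"},
    ],
    BillingMode="PAY_PER_REQUEST",
)
```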
NEW QUESTION 16
A database specialist is managing an application in the us-west-1 Region and wants to set up disaster recovery in the us-east-1 Region. The Amazon Aurora MySQL DB cluster needs an RPO of 1 minute and an RTO of 2 minutes.
Which approach meets these requirements with no negative performance impact?
- A. Enable synchronous replication.
- B. Enable asynchronous binlog replication.
- C. Create an Aurora Global Database.
- D. Copy Aurora incremental snapshots to the us-east-1 Region.
Answer: C
Explanation:
https://docs.aws.amazon.com/AmazonRDS/latest/AuroraUserGuide/aurora-global-database-disaster-recovery.html
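Aurora Global Database uses dedicated storage-level replication with typical sub-second lag, so it meets the RPO and RTO without a performance penalty on the primary. A minimal sketch that converts the existing cluster and adds a secondary Region; identifiers are hypothetical:

```python
import boto3

# Create the global database from the existing us-west-1 cluster.
boto3.client("rds", region_name="us-west-1").create_global_cluster(
    GlobalClusterIdentifier="app-global",
    SourceDBClusterIdentifier=(
        "arn:aws:rds:us-west-1:123456789012:cluster:app-cluster"
    ),
)

# Add a secondary cluster in us-east-1 for disaster recovery.
boto3.client("rds", region_name="us-east-1").create_db_cluster(
    DBClusterIdentifier="app-cluster-dr",
    Engine="aurora-mysql",
    GlobalClusterIdentifier="app-global",
)
```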
NEW QUESTION 17
A business hosts a MySQL database for its ecommerce application on a single Amazon RDS DB instance. The application automatically saves purchases to the database, resulting in high-volume writes. Employees routinely generate purchase reports. The organization wants to boost database performance and minimize the downtime associated with patching upgrades.
Which technique will satisfy these criteria with the LEAST amount of operational overhead?
- A. Enable a Multi-AZ deployment of the RDS for MySQL DB instance, and enable Memcached in the MySQL option group.
- B. Enable a Multi-AZ deployment of the RDS for MySQL DB instance, and set up replication to a MySQL DB instance running on Amazon EC2.
- C. Enable a Multi-AZ deployment of the RDS for MySQL DB instance, and add a read replica.
- D. Add a read replica and promote it to an Amazon Aurora MySQL DB cluster master. Then enable Amazon Aurora Serverless.
Answer: C
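A minimal sketch of both changes with hypothetical identifiers; Multi-AZ reduces patching downtime because the standby is patched first and the instance fails over, while the read replica offloads reporting queries:

```python
import boto3

rds = boto3.client("rds", region_name="us-east-1")

# Multi-AZ: the standby absorbs patching via failover instead of an outage.
rds.modify_db_instance(
    DBInstanceIdentifier="shop-db",
    MultiAZ=True,
    ApplyImmediately=True,
)

# Read replica: route the employees' reporting queries away from the
# write-heavy primary.
rds.create_db_instance_read_replica(
    DBInstanceIdentifier="shop-db-reports",
    SourceDBInstanceIdentifier="shop-db",
)
```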
NEW QUESTION 18
......