★ Pass on Your First TRY ★ 100% Money Back Guarantee ★ Realistic Practice Exam Questions

Free Instant Download NEW AWS-Certified-Database-Specialty Exam Dumps (PDF & VCE):
Available on: https://www.certleader.com/AWS-Certified-Database-Specialty-dumps.html


Testking AWS-Certified-Database-Specialty questions are updated and all AWS-Certified-Database-Specialty answers are verified by experts. Once you have fully prepared with our AWS-Certified-Database-Specialty exam prep kits, you will be ready for the real AWS-Certified-Database-Specialty exam without a problem. We have the most recent Amazon AWS-Certified-Database-Specialty study guide. Passed AWS-Certified-Database-Specialty on the first attempt! Here is what I did.

Free AWS-Certified-Database-Specialty Demo Online For Amazon Certification:

NEW QUESTION 1
A large retail company recently migrated its three-tier ecommerce applications to AWS. The company’s backend database is hosted on Amazon Aurora PostgreSQL. During peak times, users complain about longer page load times. A database specialist reviewed Amazon RDS Performance Insights and found a spike in IO:XactSync wait events. The SQL statements associated with the wait events are all single INSERT statements.
How should this issue be resolved?

  • A. Modify the application to commit transactions in batches
  • B. Add a new Aurora Replica to the Aurora DB cluster.
  • C. Add an Amazon ElastiCache for Redis cluster and change the application to write through.
  • D. Change the Aurora DB cluster storage to Provisioned IOPS (PIOPS).

Answer: A

Explanation:
https://docs.aws.amazon.com/AmazonRDS/latest/AuroraUserGuide/AuroraPostgreSQL.Reference.html
"This wait most often arises when there is a very high rate of commit activity on the system. You can sometimes alleviate this wait by modifying applications to commit transactions in batches."
https://docs.aws.amazon.com/AmazonRDS/latest/AuroraUserGuide/apg-waits.xactsync.html
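The batching fix that answer A describes can be sketched outside Aurora. The snippet below uses SQLite as a stand-in (the principle is identical with a PostgreSQL driver): committing once per batch instead of once per row cuts the number of synchronous commit flushes — the source of IO:XactSync waits — by roughly the batch size. The table name and batch size are illustrative.

```python
import sqlite3

def insert_in_batches(conn, rows, batch_size=100):
    """Insert rows, committing once per batch instead of once per row.

    Each commit is a synchronous flush (the source of IO:XactSync waits
    on Aurora PostgreSQL), so fewer commits means fewer waits.
    """
    commits = 0
    cur = conn.cursor()
    for start in range(0, len(rows), batch_size):
        cur.executemany("INSERT INTO orders (item) VALUES (?)",
                        rows[start:start + batch_size])
        conn.commit()  # one flush per batch, not per row
        commits += 1
    return commits

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE orders (item TEXT)")
rows = [(f"item-{i}",) for i in range(1000)]
n_commits = insert_in_batches(conn, rows, batch_size=100)
print(n_commits)  # 10 commits instead of 1000
```

With 1,000 single-row INSERTs, this reduces 1,000 commits to 10; the trade-off is that an error rolls back a whole batch rather than one row.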

NEW QUESTION 2
A large company runs a variety of Amazon DB clusters. Each cluster has its own configuration, tailored to different requirements. Depending on the team and use case, these configurations can be organized into broader categories.
A database administrator wants to make the process of storing and modifying these parameters more systematic. The database administrator also wants to ensure that changes to individual categories of configurations are automatically applied to all instances when required.
Which AWS service or feature will help automate and achieve this objective?

  • A. AWS Systems Manager Parameter Store
  • B. DB parameter group
  • C. AWS Config
  • D. AWS Secrets Manager

Answer: B

NEW QUESTION 3
A business is operating an on-premises application that is divided into three tiers: web, application, and MySQL database. The database is predominantly accessed during business hours, with occasional bursts of activity throughout the day. As part of the company's shift to AWS, a database expert wants to increase the availability and minimize the cost of the MySQL database tier.
Which MySQL database choice satisfies these criteria?

  • A. Amazon RDS for MySQL with Multi-AZ
  • B. Amazon Aurora Serverless MySQL cluster
  • C. Amazon Aurora MySQL cluster
  • D. Amazon RDS for MySQL with read replica

Answer: B

Explanation:
Amazon Aurora Serverless v1 is a simple, cost-effective option for infrequent, intermittent, or unpredictable workloads. https://aws.amazon.com/rds/aurora/serverless/

NEW QUESTION 4
A company wants to automate the creation of secure test databases with random credentials to be stored safely for later use. The credentials should have sufficient information about each test database to initiate a connection and perform automated credential rotations. The credentials should not be logged or stored anywhere in an unencrypted form.
Which steps should a Database Specialist take to meet these requirements using an AWS CloudFormation template?

  • A. Create the database with the MasterUserName and MasterUserPassword properties set to the default values.
  • B. Then, create the secret with the user name and password set to the same default values.
  • C. Add a SecretTargetAttachment resource with the SecretId and TargetId properties set to the Amazon Resource Names (ARNs) of the secret and the database.
  • D. Finally, update the secret’s password value with a randomly generated string set by the GenerateSecretString property.
  • E. Add a Mapping property from the database Amazon Resource Name (ARN) to the secret ARN.
  • F. Then, create the secret with a chosen user name and a randomly generated password set by the GenerateSecretString property.
  • G. Add the database with the MasterUserName and MasterUserPassword properties set to the user name of the secret.
  • H. Add a resource of type AWS::SecretsManager::Secret and specify the GenerateSecretString property. Then, define the database user name in the SecureStringTemplate template.
  • I. Create a resource for the database and reference the secret string for the MasterUserName and MasterUserPassword properties.
  • J. Then, add a resource of type AWS::SecretsManager::SecretTargetAttachment with the SecretId and TargetId properties set to the Amazon Resource Names (ARNs) of the secret and the database.
  • K. Create the secret with a chosen user name and a randomly generated password set by the GenerateSecretString property.
  • L. Add a SecretTargetAttachment resource with the SecretId property set to the Amazon Resource Name (ARN) of the secret and the TargetId property set to a parameter value matching the desired database ARN.
  • M. Then, create a database with the MasterUserName and MasterUserPassword properties set to the previously created values in the secret.

Answer: C
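The pattern in the correct answer — a generated secret, a database that references it, and a target attachment linking the two — can be sketched as a minimal CloudFormation template. The resource types and property names below follow the documented AWS::SecretsManager::Secret and AWS::SecretsManager::SecretTargetAttachment resources; the logical IDs and database properties are illustrative. The template is built as a Python dict so the structure can be checked without deploying anything.

```python
import json

# Minimal sketch: generate the credential, reference it from the DB via
# dynamic references, then attach the secret to the DB so automated
# rotation can discover the connection details. Nothing is stored in
# cleartext in the template.
template = {
    "AWSTemplateFormatVersion": "2010-09-09",
    "Resources": {
        "TestDBSecret": {
            "Type": "AWS::SecretsManager::Secret",
            "Properties": {
                "GenerateSecretString": {
                    "SecretStringTemplate": '{"username": "testadmin"}',
                    "GenerateStringKey": "password",
                    "PasswordLength": 32,
                    "ExcludeCharacters": '"@/\\',
                },
            },
        },
        "TestDB": {
            "Type": "AWS::RDS::DBInstance",
            "Properties": {
                "Engine": "mysql",
                "DBInstanceClass": "db.t3.micro",
                "AllocatedStorage": "20",
                # dynamic references resolve at deploy time; the password
                # never appears in the template body
                "MasterUsername": {"Fn::Sub":
                    "{{resolve:secretsmanager:${TestDBSecret}:SecretString:username}}"},
                "MasterUserPassword": {"Fn::Sub":
                    "{{resolve:secretsmanager:${TestDBSecret}:SecretString:password}}"},
            },
        },
        "SecretAttachment": {
            "Type": "AWS::SecretsManager::SecretTargetAttachment",
            "Properties": {
                "SecretId": {"Ref": "TestDBSecret"},
                "TargetId": {"Ref": "TestDB"},
                "TargetType": "AWS::RDS::DBInstance",
            },
        },
    },
}
print(json.dumps(template, indent=2)[:60])
```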

NEW QUESTION 5
A gaming company is designing a mobile gaming app that will be accessed by many users across the globe. The company wants to have replication and full support for multi-master writes. The company also wants to ensure low latency and consistent performance for app users.
Which solution meets these requirements?

  • A. Use Amazon DynamoDB global tables for storage and enable DynamoDB automatic scaling
  • B. Use Amazon Aurora for storage and enable cross-Region Aurora Replicas
  • C. Use Amazon Aurora for storage and cache the user content with Amazon ElastiCache
  • D. Use Amazon Neptune for storage

Answer: A

NEW QUESTION 6
A software development company is using Amazon Aurora MySQL DB clusters for several use cases, including development and reporting. These use cases place unpredictable and varying demands on the Aurora DB clusters, and can cause momentary spikes in latency. System users run ad-hoc queries sporadically throughout the week. Cost is a primary concern for the company, and a solution that does not require significant rework is needed.
Which solution meets these requirements?

  • A. Create new Aurora Serverless DB clusters for development and reporting, then migrate to these new DB clusters.
  • B. Upgrade one of the DB clusters to a larger size, and consolidate development and reporting activities on this larger DB cluster.
  • C. Use existing DB clusters and stop/start the databases on a routine basis using scheduling tools.
  • D. Change the DB clusters to the burstable instance family.

Answer: A

Explanation:
https://docs.aws.amazon.com/AmazonRDS/latest/AuroraUserGuide/Concepts.DBInstanceClass.html

NEW QUESTION 7
A company is running a two-tier ecommerce application in one AWS account. The database tier is an Amazon RDS for MySQL Multi-AZ DB instance. A developer mistakenly deleted the database in the production environment. The database has been restored, but the incident resulted in hours of downtime and lost revenue.
Which combination of changes in existing IAM policies should a Database Specialist make to prevent an error like this from happening in the future? (Choose three.)

  • A. Grant least privilege to groups, users, and roles
  • B. Allow all users to restore a database from a backup that will reduce the overall downtime to restore the database
  • C. Enable multi-factor authentication for sensitive operations to access sensitive resources and API operations
  • D. Use policy conditions to restrict access to selective IP addresses
  • E. Use AccessList Controls policy type to restrict users for database instance deletion
  • F. Enable AWS CloudTrail logging and Enhanced Monitoring

Answer: ACD

Explanation:
https://aws.amazon.com/blogs/database/using-iam-multifactor-authentication-with-amazon-rds/
https://docs.aws.amazon.com/AmazonRDS/latest/UserGuide/security_iam_id-based-policy-html
https://docs.aws
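The three correct controls (least privilege, MFA for sensitive operations, IP conditions) can be combined in one identity-based policy. The sketch below builds the policy document as a Python dict; the account ID, DB instance ARN, and CIDR range are placeholders, and the statements are an illustrative combination rather than a complete production policy.

```python
import json

# Illustrative IAM policy combining answers A, C, and D:
# least-privilege read access, an MFA requirement on destructive RDS
# calls, and an IP-address restriction. ARN and CIDR are placeholders.
policy = {
    "Version": "2012-10-17",
    "Statement": [
        {   # least privilege: read-only RDS access for most users
            "Effect": "Allow",
            "Action": ["rds:Describe*", "rds:ListTagsForResource"],
            "Resource": "*",
        },
        {   # deny deletes unless the caller authenticated with MFA
            "Effect": "Deny",
            "Action": ["rds:DeleteDBInstance", "rds:DeleteDBCluster"],
            "Resource": "arn:aws:rds:us-east-1:123456789012:db:prod-mysql",
            "Condition": {
                "BoolIfExists": {"aws:MultiFactorAuthPresent": "false"},
            },
        },
        {   # restrict all RDS access to selected corporate IPs
            "Effect": "Deny",
            "Action": "rds:*",
            "Resource": "*",
            "Condition": {"NotIpAddress": {"aws:SourceIp": "203.0.113.0/24"}},
        },
    ],
}
print(json.dumps(policy)[:40])
```

Explicit Deny statements override any Allow, which is why the MFA and IP conditions are expressed as denials.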

NEW QUESTION 8
A financial company wants to store sensitive user data in an Amazon Aurora PostgreSQL DB cluster. The database will be accessed by multiple applications across the company. The company has mandated that all communications to the database be encrypted and the server identity must be validated. Any non-SSL-based connections should be disallowed access to the database.
Which solution addresses these requirements?

  • A. Set the rds.force_ssl=0 parameter in DB parameter group
  • B. Download and use the Amazon RDS certificate bundle and configure the PostgreSQL connection string with sslmode=allow.
  • C. Set the rds.force_ssl=1 parameter in DB parameter group
  • D. Download and use the Amazon RDS certificate bundle and configure the PostgreSQL connection string with sslmode=disable.
  • E. Set the rds.force_ssl=0 parameter in DB parameter group
  • F. Download and use the Amazon RDS certificate bundle and configure the PostgreSQL connection string with sslmode=verify-ca.
  • G. Set the rds.force_ssl=1 parameter in DB parameter group
  • H. Download and use the Amazon RDS certificate bundle and configure the PostgreSQL connection string with sslmode=verify-full.

Answer: D

Explanation:
PostgreSQL: sslrootcert=rds-cert.pem sslmode=[verify-ca | verify-full]
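Note the division of labor: rds.force_ssl=1 rejects non-SSL connections on the server side, while sslmode=verify-full makes the client encrypt and validate the server's identity against the RDS certificate bundle. A libpq-style connection string for the client side can be assembled as below; the hostname, database, and user are illustrative.

```python
def build_pg_dsn(host, dbname, user, sslrootcert="global-bundle.pem",
                 sslmode="verify-full"):
    """Build a libpq-style connection string that encrypts the session
    and validates the server identity against the RDS certificate bundle.
    """
    parts = {
        "host": host,
        "dbname": dbname,
        "user": user,
        "sslmode": sslmode,          # verify-full = encrypt + verify CA + check hostname
        "sslrootcert": sslrootcert,  # downloaded RDS certificate bundle
    }
    return " ".join(f"{k}={v}" for k, v in parts.items())

dsn = build_pg_dsn("mycluster.cluster-abc123.us-east-1.rds.amazonaws.com",
                   "appdb", "appuser")
print(dsn)
```

The same DSN string works with drivers such as psycopg2 via `psycopg2.connect(dsn)`.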

NEW QUESTION 9
A company uses Amazon DynamoDB as the data store for its ecommerce website. The website receives little to no traffic at night, and the majority of the traffic occurs during the day. The traffic growth during peak hours is gradual and predictable on a daily basis, but it can be orders of magnitude higher than during off-peak hours.
The company initially provisioned capacity based on its average volume during the day without accounting for the variability in traffic patterns. However, the website is experiencing a significant amount of throttling during peak hours. The company wants to reduce the amount of throttling while minimizing costs.
What should a database specialist do to meet these requirements?

  • A. Use reserved capacity.
  • B. Set it to the capacity levels required for peak daytime throughput.
  • C. Use provisioned capacity.
  • D. Set it to the capacity levels required for peak daytime throughput.
  • E. Use provisioned capacity.
  • F. Create an AWS Application Auto Scaling policy to update capacity based on consumption.
  • G. Use on-demand capacity.

Answer: C

Explanation:
On-demand mode is a good option if any of the following are true: you create new tables with unknown workloads, you have unpredictable application traffic, or you prefer the ease of paying for only what you use. Here the daily traffic pattern is gradual and predictable, so provisioned capacity with auto scaling is the more cost-effective choice. https://docs.aws.amazon.com/amazondynamodb/latest/developerguide/HowItWorks.ReadWriteCapacityMode.h
Amazon DynamoDB auto scaling uses the AWS Application Auto Scaling service to dynamically adjust provisioned throughput capacity on your behalf. https://docs.aws.amazon.com/amazondynamodb/latest/developerguide/AutoScaling.html
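The auto scaling setup consists of two pieces: registering the table as a scalable target and attaching a target-tracking policy. The dicts below show the parameter shapes accepted by boto3's `application-autoscaling` client (`register_scalable_target` and `put_scaling_policy`); the table name and capacity limits are illustrative.

```python
# Register the table dimension that should scale; min covers quiet
# nighttime traffic, max gives headroom for the daytime peak.
scalable_target = {
    "ServiceNamespace": "dynamodb",
    "ResourceId": "table/ecommerce-orders",
    "ScalableDimension": "dynamodb:table:WriteCapacityUnits",
    "MinCapacity": 25,
    "MaxCapacity": 1000,
}

# Target-tracking policy: keep consumed/provisioned capacity near 70%,
# scaling up as daytime traffic grows and back down at night.
scaling_policy = {
    "PolicyName": "orders-write-scaling",
    "ServiceNamespace": "dynamodb",
    "ResourceId": "table/ecommerce-orders",
    "ScalableDimension": "dynamodb:table:WriteCapacityUnits",
    "PolicyType": "TargetTrackingScaling",
    "TargetTrackingScalingPolicyConfiguration": {
        "TargetValue": 70.0,
        "PredefinedMetricSpecification": {
            "PredefinedMetricType": "DynamoDBWriteCapacityUtilization"
        },
    },
}
print(scaling_policy["PolicyType"])
```

A matching pair would normally be registered for ReadCapacityUnits as well.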

NEW QUESTION 10
A company has an ecommerce web application with an Amazon RDS for MySQL DB instance. The marketing team has noticed some unexpected updates to the product and pricing information on the website, which is impacting sales targets. The marketing team wants a database specialist to audit future database activity to help identify how and when the changes are being made.
What should the database specialist do to meet these requirements? (Choose two.)

  • A. Create an RDS event subscription to the audit event type.
  • B. Enable auditing of CONNECT and QUERY_DML events.
  • C. SSH to the DB instance and review the database logs.
  • D. Publish the database logs to Amazon CloudWatch Logs.
  • E. Enable Enhanced Monitoring on the DB instance.

Answer: BD

Explanation:
https://aws.amazon.com/blogs/database/configuring-an-audit-log-to-capture-database-activities-for-amazon-rds
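On RDS for MySQL, the audit events in answer B are captured with the MariaDB Audit Plugin, enabled through an option group, and answer D is a log-export setting on the instance. The dicts below show the parameter shapes accepted by boto3's `rds` client (`modify_option_group` and `modify_db_instance`); the option group and instance names are illustrative.

```python
# Enable the MariaDB Audit Plugin and capture logins plus
# data-modifying statements (the unexpected UPDATEs in the scenario).
audit_option = {
    "OptionGroupName": "ecommerce-mysql-audit",
    "OptionsToInclude": [
        {
            "OptionName": "MARIADB_AUDIT_PLUGIN",
            "OptionSettings": [
                {"Name": "SERVER_AUDIT_EVENTS",
                 "Value": "CONNECT,QUERY_DML"},
            ],
        }
    ],
    "ApplyImmediately": True,
}

# Publish the audit log to CloudWatch Logs so it can be searched
# without shell access to the instance (which RDS does not allow).
log_export = {
    "DBInstanceIdentifier": "ecommerce-mysql",
    "CloudwatchLogsExportConfiguration": {"EnableLogTypes": ["audit"]},
}
print(audit_option["OptionsToInclude"][0]["OptionName"])
```

This also explains why option C is wrong: RDS is managed, so there is no SSH access to the DB host.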

NEW QUESTION 11
A media company is using Amazon RDS for PostgreSQL to store user data. The RDS DB instance currently has a publicly accessible setting enabled and is hosted in a public subnet. Following a recent AWS Well- Architected Framework review, a Database Specialist was given new security requirements.
Only certain on-premises corporate network IPs should connect to the DB instance. Connectivity is allowed from the corporate network only.
Which combination of steps does the Database Specialist need to take to meet these new requirements? (Choose three.)

  • A. Modify the pg_hba.conf file.
  • B. Add the required corporate network IPs and remove the unwanted IPs.
  • C. Modify the associated security group.
  • D. Add the required corporate network IPs and remove the unwanted IPs.
  • E. Move the DB instance to a private subnet using AWS DMS.
  • F. Enable VPC peering between the application host running on the corporate network and the VPC associated with the DB instance.
  • G. Disable the publicly accessible setting.
  • H. Connect to the DB instance using private IPs and a VPN.

Answer: BEF

Explanation:
https://docs.aws.amazon.com/AmazonRDS/latest/UserGuide/USER_VPC.WorkingWithRDSInstanceinaVPC.ht
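Two of the correct steps map directly onto API parameters: restricting the security group to corporate IPs and disabling public accessibility. The dicts below show the shapes accepted by boto3's `ec2.authorize_security_group_ingress` and `rds.modify_db_instance`; the security group ID, CIDR, and instance identifier are placeholders.

```python
# Allow PostgreSQL traffic only from the corporate network range
# (existing, broader rules would be revoked separately).
ingress_rule = {
    "GroupId": "sg-0123456789abcdef0",
    "IpPermissions": [
        {
            "IpProtocol": "tcp",
            "FromPort": 5432,
            "ToPort": 5432,
            "IpRanges": [
                {"CidrIp": "198.51.100.0/24",
                 "Description": "corporate network only"},
            ],
        }
    ],
}

# Remove the public endpoint so the instance is reachable only via
# private IPs (e.g. over the VPN from the corporate network).
disable_public_access = {
    "DBInstanceIdentifier": "user-data-postgres",
    "PubliclyAccessible": False,
    "ApplyImmediately": True,
}
print(ingress_rule["IpPermissions"][0]["FromPort"])
```

Note that pg_hba.conf (option A) is not accessible on RDS, which is why the security group is the right control here.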

NEW QUESTION 12
A database specialist is building a stack with AWS CloudFormation. The specialist wants to prevent the stack's Amazon RDS ProductionDatabase resource from being accidentally deleted.
Which solution will satisfy this requirement?

  • A. Create a stack policy to prevent updates.
  • B. Include "Effect" : "ProductionDatabase" and "Resource" : "Deny" in the policy.
  • C. Create an AWS CloudFormation stack in XML format.
  • D. Set xAttribute as false.
  • E. Create an RDS DB instance without the DeletionPolicy attribute.
  • F. Disable termination protection.
  • G. Create a stack policy to prevent updates.
  • H. Include Effect, Deny, and Resource : ProductionDatabase in the policy.

Answer: D

Explanation:
https://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/protect-stack-resources.html "When you set a stack policy, all resources are protected by default. To allow updates on all resources, we add an Allow statement that allows all actions on all resources. Although the Allow statement specifies all resources, the explicit Deny statement overrides it for the resource with the ProductionDatabase logical ID. This Deny statement prevents all update actions, such as replacement or deletion, on the ProductionDatabase resource."
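The stack policy the quoted documentation describes can be written out directly. The sketch below builds it as a Python dict matching the documented stack-policy grammar (Effect, Action, Principal, Resource with a logical-ID path); only the ProductionDatabase logical ID comes from the question.

```python
import json

# Stack policy per the quoted docs: allow updates on everything, then an
# explicit Deny (which overrides the Allow) for all update actions --
# including replacement and deletion -- on the ProductionDatabase resource.
stack_policy = {
    "Statement": [
        {
            "Effect": "Allow",
            "Action": "Update:*",
            "Principal": "*",
            "Resource": "*",
        },
        {
            "Effect": "Deny",
            "Action": "Update:*",
            "Principal": "*",
            "Resource": "LogicalResourceId/ProductionDatabase",
        },
    ]
}
print(json.dumps(stack_policy)[:40])
```

The policy is attached with `aws cloudformation set-stack-policy` (or the SetStackPolicy API) on the existing stack.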

NEW QUESTION 13
A company is going through a security audit. The audit team has identified a cleartext master user password in the AWS CloudFormation templates for Amazon RDS for MySQL DB instances. The audit team has flagged this as a security risk to the database team.
What should a database specialist do to mitigate this risk?

  • A. Change all the databases to use AWS IAM for authentication and remove all the cleartext passwords in CloudFormation templates.
  • B. Use an AWS Secrets Manager resource to generate a random password and reference the secret in the CloudFormation template.
  • C. Remove the passwords from the CloudFormation templates so Amazon RDS prompts for the password when the database is being created.
  • D. Remove the passwords from the CloudFormation template and store them in a separate file.
  • E. Replace the passwords by running CloudFormation using a sed command.

Answer: B

Explanation:
https://aws.amazon.com/blogs/infrastructure-and-automation/securing-passwords-in-aws-quick-starts-using-aws

NEW QUESTION 14
A Database Specialist is migrating a 2 TB Amazon RDS for Oracle DB instance to an RDS for PostgreSQL DB instance using AWS DMS. The source RDS Oracle DB instance is in a VPC in the us-east-1 Region. The target RDS for PostgreSQL DB instance is in a VPC in the us-west-2 Region.
Where should the AWS DMS replication instance be placed for the MOST optimal performance?

  • A. In the same Region and VPC of the source DB instance
  • B. In the same Region and VPC as the target DB instance
  • C. In the same VPC and Availability Zone as the target DB instance
  • D. In the same VPC and Availability Zone as the source DB instance

Answer: C

Explanation:
https://docs.aws.amazon.com/dms/latest/userguide/CHAP_ReplicationInstance.VPC.html#CHAP_ReplicationIn
In fact, all of the configurations listed at the URL above place the replication instance in the target's VPC Region, subnet, and Availability Zone.
https://docs.aws.amazon.com/dms/latest/sbs/CHAP_SQLServer2Aurora.Steps.CreateReplicationInstance.html

NEW QUESTION 15
A company has two separate AWS accounts: one for the business unit and another for corporate analytics. The company wants to replicate the business unit data stored in Amazon RDS for MySQL in us-east-1 to its corporate analytics Amazon Redshift environment in us-west-1. The company wants to use AWS DMS with Amazon RDS as the source endpoint and Amazon Redshift as the target endpoint.
Which action will allow AWS DMS to perform the replication?

  • A. Configure the AWS DMS replication instance in the same account and Region as Amazon Redshift.
  • B. Configure the AWS DMS replication instance in the same account as Amazon Redshift and in the same Region as Amazon RDS.
  • C. Configure the AWS DMS replication instance in its own account and in the same Region as Amazon Redshift.
  • D. Configure the AWS DMS replication instance in the same account and Region as Amazon RDS.

Answer: A

Explanation:
https://docs.aws.amazon.com/dms/latest/userguide/CHAP_Target.Redshift.html

NEW QUESTION 16
A Database Specialist is migrating an on-premises Microsoft SQL Server application database to Amazon RDS for PostgreSQL using AWS DMS. The application requires minimal downtime when the RDS DB instance goes live.
What change should the Database Specialist make to enable the migration?

  • A. Configure the on-premises application database to act as a source for an AWS DMS full load with ongoing change data capture (CDC)
  • B. Configure the AWS DMS replication instance to allow both full load and ongoing change data capture (CDC)
  • C. Configure the AWS DMS task to generate full logs to allow for ongoing change data capture (CDC)
  • D. Configure the AWS DMS connections to allow two-way communication to allow for ongoing change data capture (CDC)

Answer: A

Explanation:
The requirement for minimal downtime when the RDS DB instance goes live calls for an AWS DMS full load with ongoing change data capture (CDC), configured on the source database. The source must be prepared for CDC — for an Oracle source, for example: "you must first ensure that ARCHIVELOG MODE is on to provide information to LogMiner. AWS DMS uses LogMiner to read information from the archive logs so that AWS DMS can capture changes."
https://docs.aws.amazon.com/dms/latest/sbs/chap-oracle2postgresql.steps.configureoracle.html "If you want to capture and apply changes (CDC), then you also need the following privileges."
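The task that answer A describes — full load of existing data, then ongoing replication of changes captured during and after the load — corresponds to one setting on the DMS task. The dict below shows the parameter shape accepted by boto3's `dms.create_replication_task`; the ARNs and table-mapping rule are placeholders.

```python
# A full-load-and-cdc task: copy existing data, then keep applying
# captured changes until the application cuts over, minimizing downtime.
task_params = {
    "ReplicationTaskIdentifier": "sqlserver-to-postgres",
    "SourceEndpointArn": "arn:aws:dms:us-east-1:123456789012:endpoint:SRC",
    "TargetEndpointArn": "arn:aws:dms:us-east-1:123456789012:endpoint:TGT",
    "ReplicationInstanceArn": "arn:aws:dms:us-east-1:123456789012:rep:RI",
    "MigrationType": "full-load-and-cdc",  # the key setting for answer A
    "TableMappings": (
        '{"rules": [{"rule-type": "selection", "rule-id": "1", '
        '"rule-name": "1", "object-locator": '
        '{"schema-name": "dbo", "table-name": "%"}, '
        '"rule-action": "include"}]}'
    ),
}
print(task_params["MigrationType"])
```

The alternative MigrationType values are "full-load" (one-time copy, longer cutover downtime) and "cdc" (changes only).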

NEW QUESTION 17
A company’s database specialist disabled TLS on an Amazon DocumentDB cluster to perform benchmarking tests. A few days after this change was implemented, a database specialist trainee accidentally deleted multiple tables. The database specialist restored the database from available snapshots. An hour after restoring the cluster, the database specialist is still unable to connect to the new cluster endpoint.
What should the database specialist do to connect to the new, restored Amazon DocumentDB cluster?

  • A. Change the restored cluster’s parameter group to the original cluster’s custom parameter group.
  • B. Change the restored cluster’s parameter group to the Amazon DocumentDB default parameter group.
  • C. Configure the interface VPC endpoint and associate the new Amazon DocumentDB cluster.
  • D. Run the syncInstances command in AWS DataSync.

Answer: A

Explanation:
You can't modify the parameter settings of a default parameter group. A DB parameter group acts as a container for engine configuration values that are applied to one or more DB instances. If you create a DB instance without specifying a DB parameter group, the instance uses a default DB parameter group, which contains database engine defaults and Amazon RDS system defaults. To use non-default settings — such as the disabled TLS setting in this scenario — you must attach your own custom parameter group, keeping in mind that not all engine parameters can be changed in a parameter group that you create.
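The restore falls back to the default cluster parameter group, where tls is enabled, while the client tooling was reconfigured for the non-TLS benchmark — hence the failed connections. Re-attaching the original custom parameter group (answer A) restores the expected setting. The dicts below show the parameter shapes accepted by boto3's `docdb` client (`modify_db_cluster` and `modify_db_cluster_parameter_group`); the cluster and group names are illustrative.

```python
# Point the restored cluster back at the original custom parameter group.
modify_cluster = {
    "DBClusterIdentifier": "restored-docdb-cluster",
    "DBClusterParameterGroupName": "benchmark-no-tls",
}

# The custom group was created earlier with TLS disabled for the
# benchmark, e.g. via modify_db_cluster_parameter_group:
modify_parameter_group = {
    "DBClusterParameterGroupName": "benchmark-no-tls",
    "Parameters": [
        {"ParameterName": "tls",
         "ParameterValue": "disabled",
         "ApplyMethod": "pending-reboot"},  # tls is a static parameter
    ],
}
print(modify_cluster["DBClusterParameterGroupName"])
```

Because tls is a static parameter, the instances must be rebooted after the group change for it to take effect.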

NEW QUESTION 18
......

P.S. Easily pass the AWS-Certified-Database-Specialty exam with the 270-question Downloadfreepdf.net dumps (PDF version). Download the newest Downloadfreepdf.net AWS-Certified-Database-Specialty dumps here: https://www.downloadfreepdf.net/AWS-Certified-Database-Specialty-pdf-download.html (270 New Questions)