[Webcast] 10 Clever Astra Demos: Build Cloud-Native Apps with the Astra DBaaS

What's this blog post about?

In this tutorial, we will use Apache Cassandra to store data about restaurants in the Denver area. We will use DataStax Astra as our managed cloud service for running Cassandra and dsbulk to load data into it. Additionally, we will demonstrate how to perform geospatial queries on the data using a combination of CQL (Cassandra Query Language) and a third-party geohashing library. Finally, we will use NoSQLBench to generate workloads for performance testing.

First, create an account on Astra and set up a new database instance with a name like "DenverRestaurants". Then, download the secure connect bundle from your newly created database instance. This bundle contains all the information needed to connect to your Astra database securely using any DataStax driver.

Next, install dsbulk on your local machine and use it to load data into your Astra database. To do this, first download a CSV file containing restaurant data for the Denver area from a public API or dataset. Then, create a new keyspace in your Astra database called "map_data" with two tables: "star" and "geohash". Now, use dsbulk to load the restaurant data into the "star" table, passing your secure connect bundle and database credentials along with the CSV:

```bash
dsbulk load -url restaurants.csv -k map_data -t star -header true \
  -b /path/to/secure-connect-DenverRestaurants.zip -u <username> -p <password>
```

This will create a new row in the "star" table for each restaurant, with columns such as name, address, and coordinates.

Next, we want to add geospatial functionality to our database so that we can easily find nearby restaurants based on their location. To do this, we will use a technique called geohashing, which converts latitude and longitude coordinates into an alphanumeric identifier known as a "geohash". This allows us to quickly search for all the restaurants within a certain area by simply looking up the corresponding geohash value. To add this functionality to our Astra database, we will create another table called "geohash" with columns for latitude, longitude, and a 5-character geohash prefix. Then, we can use CQL to query the "star" table based on these geohash values. For example, if we want to find all the restaurants in a certain area around the Denver Art Museum, we can first look up its coordinates and then convert them into a 5-character geohash prefix using an online tool like geohash.org. We can then use this prefix in our CQL query to select only those rows from the "star" table that have matching values in their "geohash_five" column:

```cql
SELECT * FROM map_data.star WHERE geohash_five = '9XJ64';
```

This will return all the restaurants within this area, which we can then display on a map using JavaScript or another front-end framework.
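The webcast does not spell out the full table definitions, so the following CQL is only a minimal sketch of what the "star" and "geohash" schemas might look like; the primary keys and any columns beyond the ones mentioned above (name, address, coordinates, geohash_five) are assumptions. In particular, geohash_five is modeled as the partition key so that the query above is a single-partition read rather than one requiring a secondary index or ALLOW FILTERING.

```cql
-- Hypothetical schema sketch: primary keys and extra columns are assumptions, not taken from the webcast.
CREATE TABLE IF NOT EXISTS map_data.star (
    geohash_five text,      -- 5-character geohash prefix used for area lookups
    name text,
    address text,
    latitude double,
    longitude double,
    PRIMARY KEY (geohash_five, name)
);

CREATE TABLE IF NOT EXISTS map_data.geohash (
    geohash_five text,      -- 5-character geohash prefix
    latitude double,
    longitude double,
    PRIMARY KEY (geohash_five)
);
```

Partitioning by the geohash prefix groups nearby restaurants into the same partition, which is what keeps the single-prefix lookup cheap; a wider search area can still be covered by querying a handful of adjacent prefixes.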
Finally, to test the performance of our Astra database under various workloads, we can use NoSQLBench, an open source tool for generating synthetic workloads and measuring their impact on Cassandra clusters. To run NoSQLBench against our Astra database, first pull its Docker image:

```bash
docker pull nosqlbench/nosqlbench
```

Then, create a new configuration file for your test scenario (e.g., "iot_baseline.yaml") and specify the necessary connection details for your Astra database, such as the secure connect bundle path, username, and password. Finally, run NoSQLBench with this scenario file, mounting the secure connect bundle into the container, to start generating workloads:

```bash
docker run -v /path/to/secure_connect_bundle:/tmp/secure-connect-DenverRestaurants \
  nosqlbench/nosqlbench run driver=cql yaml=iot_baseline.yaml
```

This will begin running the specified test scenario against your Astra database, allowing you to monitor its performance and adjust settings as needed.
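The connection details can also be passed to the NoSQLBench activity directly as named parameters instead of being embedded in the scenario file. The sketch below assumes a 2020-era NoSQLBench CQL driver that accepts secureconnectbundle, username, and password parameters, and it mounts the scenario file into the container as well; the paths, credentials, and cycle count are placeholders rather than values from the webcast:

```bash
# Hedged sketch: the secureconnectbundle, username, and password parameter names are
# assumptions about the NoSQLBench CQL driver, not commands taken from the webcast.
docker run \
  -v /path/to/secure_connect_bundle:/tmp/scb \
  -v "$(pwd)/iot_baseline.yaml:/workspace/iot_baseline.yaml" \
  nosqlbench/nosqlbench run driver=cql yaml=/workspace/iot_baseline.yaml \
  secureconnectbundle=/tmp/scb/secure-connect-DenverRestaurants.zip \
  username=<astra-username> password=<astra-password> cycles=100000
```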

Company
DataStax

Date published
July 27, 2020

Author(s)
Matt Kennedy

Word count
7842

Language
English

Hacker News points
None found.


By Matt Makai. 2021-2024.