Hot questions for using Cassandra with Spring Data

Question:

I am trying to configure Spring Data with Cassandra, but I am getting the below error when my app is deployed in Tomcat.

When I check the connection, the given port is reachable (127.0.0.1:9042). I have included the stack trace and Spring configuration below. Does anyone have an idea about this error?

Full stack trace:

2015-12-06 17:46:25 ERROR web.context.ContextLoader:331 - Context initialization failed
org.springframework.beans.factory.BeanCreationException: Error creating bean with name 'cassandraSession': Invocation of init method failed; nested exception is com.datastax.driver.core.exceptions.NoHostAvailableException: All host(s) tried for query failed (tried: /127.0.0.1:9042 (com.datastax.driver.core.exceptions.InvalidQueryException: unconfigured table schema_keyspaces))
    at org.springframework.beans.factory.support.AbstractAutowireCapableBeanFactory.initializeBean(AbstractAutowireCapableBeanFactory.java:1572)
    at org.springframework.beans.factory.support.AbstractAutowireCapableBeanFactory.doCreateBean(AbstractAutowireCapableBeanFactory.java:539)
    at org.springframework.beans.factory.support.AbstractAutowireCapableBeanFactory.createBean(AbstractAutowireCapableBeanFactory.java:476)
    at org.springframework.beans.factory.support.AbstractBeanFactory$1.getObject(AbstractBeanFactory.java:303)
    at org.springframework.beans.factory.support.DefaultSingletonBeanRegistry.getSingleton(DefaultSingletonBeanRegistry.java:230)
    at org.springframework.beans.factory.support.AbstractBeanFactory.doGetBean(AbstractBeanFactory.java:299)
    at org.springframework.beans.factory.support.AbstractBeanFactory.getBean(AbstractBeanFactory.java:194)
    at org.springframework.beans.factory.support.DefaultListableBeanFactory.preInstantiateSingletons(DefaultListableBeanFactory.java:736)
    at org.springframework.context.support.AbstractApplicationContext.finishBeanFactoryInitialization(AbstractApplicationContext.java:759)
    at org.springframework.context.support.AbstractApplicationContext.refresh(AbstractApplicationContext.java:480)
    at org.springframework.web.context.ContextLoader.configureAndRefreshWebApplicationContext(ContextLoader.java:434)
    at org.springframework.web.context.ContextLoader.initWebApplicationContext(ContextLoader.java:306)
    at org.springframework.web.context.ContextLoaderListener.contextInitialized(ContextLoaderListener.java:106)
    at org.apache.catalina.core.StandardContext.listenerStart(StandardContext.java:4994)
    at org.apache.catalina.core.StandardContext.startInternal(StandardContext.java:5492)
    at org.apache.catalina.util.LifecycleBase.start(LifecycleBase.java:150)
    at org.apache.catalina.core.ContainerBase.addChildInternal(ContainerBase.java:901)
    at org.apache.catalina.core.ContainerBase.addChild(ContainerBase.java:877)
    at org.apache.catalina.core.StandardHost.addChild(StandardHost.java:649)
    at org.apache.catalina.startup.HostConfig.deployDirectory(HostConfig.java:1245)
    at org.apache.catalina.startup.HostConfig$DeployDirectory.run(HostConfig.java:1895)
    at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
    at java.util.concurrent.FutureTask.run(FutureTask.java:266)
    at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
    at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
    at java.lang.Thread.run(Thread.java:745)
Caused by: com.datastax.driver.core.exceptions.NoHostAvailableException: All host(s) tried for query failed (tried: /127.0.0.1:9042 (com.datastax.driver.core.exceptions.InvalidQueryException: unconfigured table schema_keyspaces))
    at com.datastax.driver.core.ControlConnection.reconnectInternal(ControlConnection.java:223)
    at com.datastax.driver.core.ControlConnection.connect(ControlConnection.java:78)
    at com.datastax.driver.core.Cluster$Manager.init(Cluster.java:1230)
    at com.datastax.driver.core.Cluster.init(Cluster.java:157)
    at com.datastax.driver.core.Cluster.connect(Cluster.java:245)
    at com.datastax.driver.core.Cluster.connect(Cluster.java:278)
    at org.springframework.cassandra.config.CassandraCqlSessionFactoryBean.afterPropertiesSet(CassandraCqlSessionFactoryBean.java:82)
    at org.springframework.data.cassandra.config.CassandraSessionFactoryBean.afterPropertiesSet(CassandraSessionFactoryBean.java:43)

===================================================================

Spring Configuration:

<?xml version="1.0" encoding="UTF-8"?>
<beans:beans ...>

    <cassandra:cluster id="cassandraCluster"
                       contact-points="127.0.0.1" port="9042" />
    <cassandra:converter />

    <cassandra:session id="cassandraSession" cluster-ref="cassandraCluster"
                       keyspace-name="blood" />

    <cassandra:template id="cqlTemplate" />


    <cassandra:repositories base-package="com.blood.dao.nosql" />
    <cassandra:mapping entity-base-packages="com.blood.domain.nosql" />

</beans:beans>

Answer:

The problem is that Spring Data Cassandra (as of December 2015, when this was written) does not support Cassandra 3.x. Here's an excerpt from a conversation with one of the developers in the #spring channel on freenode:

[13:49] <_amicable> Hi all, does anybody know if spring data cassandra supports cassandra 3.x? All dependencies & datastax drivers seem to be 2.x

[13:49] <@_ollie> amicable: Not in the near future.

[13:49] <_amicable> _ollie: thanks.

[13:50] <_amicable> I'll go and look at the relative merits of 2.x vs 3.x then ;)

[13:51] <@_ollie> SD Cassandra is a community project (so far) and its progress highly depends on how much time the developers can actually spend on it.

[13:51] <@_ollie> We will have someone joining the team in February 2016 to get the project more closely aligned to the core Spring Data projects.

Question:

Using Cassandra, I want to create the keyspace and tables dynamically from a Spring Boot application. I am using Java-based configuration.

I have an entity annotated with @Table whose schema I want created before the application starts up, since it has fixed fields that are known beforehand.

However, depending on the logged-in user, I also want to create additional tables for those users dynamically and be able to insert entries into those tables.

Can somebody point me to some resources I can make use of, or point me in the right direction on how to go about solving this? Thanks a lot for the help!


Answer:

The easiest thing to do would be to add the Spring Boot Starter Data Cassandra dependency to your Spring Boot application, like so...

<dependency>
  <groupId>org.springframework.boot</groupId>
  <artifactId>spring-boot-starter-data-cassandra</artifactId>
  <version>1.3.5.RELEASE</version>
</dependency>

This will, in turn, add the Spring Data Cassandra dependency to your application.

With Spring Data Cassandra, you can configure your application's Keyspace(s) using the CassandraClusterFactoryBean (or more precisely, the subclass... CassandraCqlClusterFactoryBean) by calling the setKeyspaceCreations(:Set) method.

The KeyspaceActionSpecification class is pretty self-explanatory. You can even create one with the KeyspaceActionSpecificationFactoryBean, add it to a Set and then pass that to the setKeyspaceCreations(..) method on the CassandraClusterFactoryBean.

For generating the application's Tables, you essentially just need to annotate your application domain object(s) (entities) using the SD Cassandra @Table annotation, and make sure your domain objects/entities can be found on the application's CLASSPATH.

Specifically, you can have your application @Configuration class extend the SD Cassandra AbstractClusterConfiguration class. There, you will find the getEntityBasePackages():String[] method that you can override to provide the package locations containing your application domain object/entity classes, which SD Cassandra will then use to scan for @Table domain object/entities.

With your application @Table domain object/entities properly identified, you set the SD Cassandra SchemaAction to CREATE using the CassandraSessionFactoryBean method setSchemaAction(:SchemaAction). This will create Tables in your Keyspace for all domain object/entities found during the scan, provided you identified the proper Keyspace on your CassandraSessionFactoryBean.

Obviously, if your application creates/uses multiple Keyspaces, you will need to create a separate CassandraSessionFactoryBean for each Keyspace, with the entityBasePackages configuration property set appropriately for the entities that belong to a particular Keyspace, so that the associated Tables are created in that Keyspace.
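Pulling the Keyspace-creation and Table-generation pieces together, a minimal Java configuration might look like the following sketch. It assumes the Spring Data Cassandra 1.x class and method names discussed above (AbstractCassandraConfiguration, CreateKeyspaceSpecification, SchemaAction); verify them against the version on your CLASSPATH, and treat the keyspace and package names as placeholders.

```java
import java.util.Collections;
import java.util.List;

import org.springframework.cassandra.core.keyspace.CreateKeyspaceSpecification;
import org.springframework.context.annotation.Configuration;
import org.springframework.data.cassandra.config.SchemaAction;
import org.springframework.data.cassandra.config.java.AbstractCassandraConfiguration;

// Sketch only: method names may vary between Spring Data Cassandra versions.
@Configuration
public class CassandraConfig extends AbstractCassandraConfiguration {

    @Override
    protected String getKeyspaceName() {
        return "myapp"; // placeholder keyspace name
    }

    // Create the Keyspace on startup.
    @Override
    protected List<CreateKeyspaceSpecification> getKeyspaceCreations() {
        return Collections.singletonList(
                CreateKeyspaceSpecification.createKeyspace("myapp").ifNotExists());
    }

    // Scan these packages for @Table domain objects/entities...
    @Override
    public String[] getEntityBasePackages() {
        return new String[] { "com.example.domain" }; // placeholder package
    }

    // ...and generate their Tables in the Keyspace.
    @Override
    public SchemaAction getSchemaAction() {
        return SchemaAction.CREATE;
    }
}
```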

Now...

For the "additional" Tables per user, that is quite a bit more complicated and tricky.

You might be able to leverage Spring Profiles here; however, profiles are generally only applied at startup. If a different user logs into an already-running application, you need a way to supply additional @Configuration classes to the Spring ApplicationContext at runtime.

Your Spring Boot application could inject a reference to an AnnotationConfigApplicationContext, and then use it on a login event to programmatically register additional @Configuration classes based on the user who logged into the application. You need to follow your register(Class...) call(s) with an ApplicationContext.refresh().

You also need to appropriately handle the situation where the Tables already exist.

This is not currently supported in SD Cassandra, but see DATACASS-219 for further details.

Technically, it would be far simpler to create all the possible Tables needed by the application for all users at runtime and use Cassandra's security settings to restrict individual user access by role and assigned permissions.

Another option might be to create temporary Keyspaces and/or Tables as needed when a user logs into the application, and drop them when the user logs out.

Clearly, there are a lot of different choices here, and it boils down more to architectural decisions, tradeoffs and considerations than it does technical feasibility, so be careful.

Hope this helps.

Cheers!

Question:

I have cassandra table with a date type column as follows:

create table people
(
   id int primary key, 
   name text, 
   email text, 
   dob date
);

I am using SpringBoot 1.5.2 + Spring Data Cassandra Starter.

@Table("people")
public class Person {
    @PrimaryKey
    Integer id;
    private String name;
    private String email;
    private java.util.Date dob;
    //setters and getters
}

public interface PersonRepository extends CrudRepository<Person, Integer>{

}

I am inserting new Person as follows:

personRepository.save(new Person(1, "Siva","siva@gmail.com", new java.util.Date()));

It is throwing the following error:

Caused by: com.datastax.driver.core.exceptions.InvalidQueryException: Expected 4 byte long for date (8)
    at com.datastax.driver.core.Responses$Error.asException(Responses.java:136) ~[cassandra-driver-core-3.1.4.jar:na]
    at com.datastax.driver.core.DefaultResultSetFuture.onSet(DefaultResultSetFuture.java:179) ~[cassandra-driver-core-3.1.4.jar:na]
    at com.datastax.driver.core.RequestHandler.setFinalResult(RequestHandler.java:177) ~[cassandra-driver-core-3.1.4.jar:na]
    at com.datastax.driver.core.RequestHandler.access$2500(RequestHandler.java:46) ~[cassandra-driver-core-3.1.4.jar:na]

But if I make the dob column type timestamp, it works fine. Is it possible to have a date-type column and use java.util.Date properties?

P.S.: Even if I use java.sql.Date I get the same error.


Answer:

Use com.datastax.driver.core.LocalDate

You can use either of these methods to get a LocalDate from a java.util.Date:

  • LocalDate.fromYearMonthDay(2017, 03, 28)
  • LocalDate.fromMillisSinceEpoch(new Date().getTime())
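Note that the driver's LocalDate here is com.datastax.driver.core.LocalDate, not java.time.LocalDate. What fromMillisSinceEpoch does, interpreting the millis as a UTC instant and keeping only the calendar date, can be sketched with the plain JDK:

```java
import java.time.Instant;
import java.time.LocalDate;
import java.time.ZoneOffset;
import java.util.Date;

public class MillisToDate {
    public static void main(String[] args) {
        // 1490695200000 ms = 2017-03-28T10:00:00Z
        Date now = new Date(1490695200000L);

        // Same calendar-date extraction the driver performs, via java.time:
        LocalDate date = Instant.ofEpochMilli(now.getTime())
                                .atZone(ZoneOffset.UTC)
                                .toLocalDate();

        System.out.println(date); // 2017-03-28
    }
}
```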

Or you could create your own codec that allows you to insert java.util.Date into the Cassandra date type.

You can start with something like this:

import java.nio.ByteBuffer;
import java.util.Date;

import com.datastax.driver.core.LocalDate;
import com.datastax.driver.core.ProtocolVersion;
import com.datastax.driver.core.TypeCodec;
import com.datastax.driver.core.exceptions.InvalidTypeException;

public class DateCodec extends TypeCodec<Date> {

    private final TypeCodec<LocalDate> innerCodec;

    public DateCodec(TypeCodec<LocalDate> codec, Class<Date> javaClass) {
        super(codec.getCqlType(), javaClass);
        innerCodec = codec;
    }

    @Override
    public ByteBuffer serialize(Date value, ProtocolVersion protocolVersion) throws InvalidTypeException {
        return value == null ? null
                : innerCodec.serialize(LocalDate.fromMillisSinceEpoch(value.getTime()), protocolVersion);
    }

    @Override
    public Date deserialize(ByteBuffer bytes, ProtocolVersion protocolVersion) throws InvalidTypeException {
        LocalDate date = innerCodec.deserialize(bytes, protocolVersion);
        return date == null ? null : new Date(date.getMillisSinceEpoch());
    }

    @Override
    public Date parse(String value) throws InvalidTypeException {
        LocalDate date = innerCodec.parse(value);
        return date == null ? null : new Date(date.getMillisSinceEpoch());
    }

    @Override
    public String format(Date value) throws InvalidTypeException {
        // Delegate to the inner codec so the output is a valid CQL date literal.
        return value == null ? "NULL" : innerCodec.format(LocalDate.fromMillisSinceEpoch(value.getTime()));
    }

}

When creating the connection, you have to register it:

CodecRegistry codecRegistry = new CodecRegistry();
codecRegistry.register(new DateCodec(TypeCodec.date(), Date.class));
Cluster.builder().withCodecRegistry(codecRegistry).build();

For more: http://docs.datastax.com/en/developer/java-driver/3.1/manual/custom_codecs/

Question:

I have the following column family:

@Table(value = "request_event")
public class RequestEvent {

    @PrimaryKeyColumn(name = "day_requested", ordinal = 0, type = PrimaryKeyType.PARTITIONED)
    private LocalDate dayRequested;

    @PrimaryKeyColumn(name = "date_requested", ordinal = 1, type = PrimaryKeyType.CLUSTERED, ordering = Ordering.DESCENDING)
    private LocalDateTime dateRequested;

    ...
}

which is stored and accessed by a repository:

@Repository
public interface RequestEventRepository extends CrudRepository<RequestEvent, LocalDateTime> {
}

Unfortunately requestEventRepository.findOne(localDate) is throwing an exception, probably because it is returning multiple results. How can I fix this? Also, how can all results from a particular day be retrieved?


Answer:

You have two options to represent compound keys with Spring Data Cassandra:

  1. Using @PrimaryKeyColumn within the domain type (like you did).
  2. Using a @PrimaryKeyClass to represent the primary key and embed it in the domain type.

Spring Data repositories accept a single ID type. Therefore it's not possible to just declare LocalDateTime as the id. If you want to stick to @PrimaryKeyColumn within the domain type, then use MapId as the id type:

@Table(value = "request_event")
public class RequestEvent {

    @PrimaryKeyColumn(name = "day_requested", ordinal = 0,
            type = PrimaryKeyType.PARTITIONED) 
    private LocalDate dayRequested;

    @PrimaryKeyColumn(name = "date_requested", ordinal = 1, type = PrimaryKeyType.CLUSTERED,
            ordering = Ordering.DESCENDING) 
    private LocalDateTime dateRequested;

}

public interface RequestEventRepository extends CrudRepository<RequestEvent, MapId> {}

MapId mapId = BasicMapId.id("dayRequested", …).with("dateRequested", …);

RequestEvent loaded = eventRepository.findOne(mapId);

If you decide to represent your primary key as a value object, then you need to adjust your domain type slightly:

@PrimaryKeyClass
public class Key implements Serializable {

    @PrimaryKeyColumn(name = "day_requested", ordinal = 0,
            type = PrimaryKeyType.PARTITIONED) 
    private LocalDate dayRequested;

    @PrimaryKeyColumn(name = "date_requested", ordinal = 1, type = PrimaryKeyType.CLUSTERED,
            ordering = Ordering.DESCENDING) 
    private LocalDateTime dateRequested;

}

@Table(value = "request_event")
public class RequestEvent {

    @PrimaryKey 
    private Key key;

}

public interface RequestEventRepository extends CrudRepository<RequestEvent, Key> {}

eventRepository.findOne(new Key(…))

Question:

I am attempting to persist a java.time.LocalDateTime object in my Cassandra database and keep it timezone agnostic. I am using Spring Data Cassandra to do this.

The problem is that somewhere along the line, something is treating these LocalDateTime objects as if they are in the timezone of my server, and offsetting them to UTC time when it stores them in the database.

Is this a bug or a feature? Can I work around it in some way?

Configuration:

@Configuration
@EnableCassandraRepositories(
    basePackages = "my.base.package")
public class CassandraConfig extends AbstractCassandraConfiguration{

    @Override
    protected String getKeyspaceName() {
        return "keyspacename";
    }

    @Bean
    public CassandraClusterFactoryBean cluster() {
        CassandraClusterFactoryBean cluster =
            new CassandraClusterFactoryBean();
        cluster.setContactPoints("127.0.0.1");
        cluster.setPort(9142);
        return cluster;
    }

    @Bean
    public CassandraMappingContext cassandraMapping()
        throws ClassNotFoundException {
        return new BasicCassandraMappingContext();
    }
}

Booking record I wish to persist:

@Table("booking")
public class BookingRecord {
    @PrimaryKeyColumn(
        ordinal = 0,
        type = PrimaryKeyType.PARTITIONED
    )
    private UUID bookingId = null;

    @PrimaryKeyColumn(
        ordinal = 1,
        type = PrimaryKeyType.CLUSTERED,
        ordering = Ordering.ASCENDING
    )
    private LocalDateTime startTime = null;
    ...
}

Simple Repository Interface:

@Repository
public interface BookingRepository extends CassandraRepository<BookingRecord> { }

Save Call:

...

@Autowired
BookingRepository bookingRepository;

...

public void saveBookingRecord(BookingRecord bookingRecord) {
    bookingRepository.save(bookingRecord);
}

Here is the string used to populate the starttime in BookingRecord:

"startTime": "2017-06-10T10:00:00Z"

And here is the output from cqlsh after the timestamp has been persisted:

cqlsh:keyspacename> select * from booking ;

 bookingid                            | starttime               
--------------------------------------+--------------------------------
 8b640c30-4c94-11e7-898b-6dab708ec5b4 | 2017-06-10 15:00:00.000000+0000 

Answer:

I do actually want to use LocalDateTime and LocalDate in my project, rather than java.util.Date, since they are newer and have more attractive functionality.

After much searching I have found a workaround.

First, you must create custom implementations of Spring's Converter interface as follows:

One for Date to LocalDateTime:

import java.time.LocalDateTime;
import java.time.ZoneOffset;
import java.util.Date;

import org.springframework.core.convert.converter.Converter;

public class DateToLocalDateTime implements Converter<Date, LocalDateTime> {

    @Override
    public LocalDateTime convert(Date source) {
        return source == null ? null : LocalDateTime.ofInstant(source.toInstant(), ZoneOffset.UTC);
    }

}

And one for LocalDateTime to Date:

import java.time.LocalDateTime;
import java.time.ZoneOffset;
import java.util.Date;

import org.springframework.core.convert.converter.Converter;

public class LocalDateTimeToDate implements Converter<LocalDateTime, Date> {

    @Override
    public Date convert(LocalDateTime source) {
        return source == null ? null : Date.from(source.toInstant(ZoneOffset.UTC));
    }

}
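Why this keeps values timezone-agnostic can be checked with plain java.time, no Spring required. The round trip the two converters perform is, in sketch form:

```java
import java.time.LocalDateTime;
import java.time.ZoneOffset;
import java.util.Date;

public class UtcRoundTrip {
    public static void main(String[] args) {
        LocalDateTime ldt = LocalDateTime.parse("2017-06-10T10:00:00");

        // LocalDateTimeToDate: interpret the wall-clock value as UTC.
        Date stored = Date.from(ldt.toInstant(ZoneOffset.UTC));
        System.out.println(stored.toInstant()); // 2017-06-10T10:00:00Z

        // DateToLocalDateTime: read it back, again as UTC.
        LocalDateTime back = LocalDateTime.ofInstant(stored.toInstant(), ZoneOffset.UTC);
        System.out.println(back.equals(ldt)); // true
    }
}
```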

Finally, you must override the customConversions method in CassandraConfig as follows:

@Configuration
@EnableCassandraRepositories(basePackages = "my.base.package")
public class CassandraConfig extends AbstractCassandraConfiguration{

    @Override
    protected String getKeyspaceName() {
        return "keyspacename";
    }

    @Override
    public CustomConversions customConversions() {
        List<Converter> converters = new ArrayList<>();

        converters.add(new DateToLocalDateTime());
        converters.add(new LocalDateTimeToDate());

        return new CustomConversions(converters);
    }

    @Bean
    public CassandraClusterFactoryBean cluster() {
        CassandraClusterFactoryBean cluster =
            new CassandraClusterFactoryBean();
        cluster.setContactPoints("127.0.0.1");
        cluster.setPort(9142);
        return cluster;
    }

    @Bean
    public CassandraMappingContext cassandraMapping()
        throws ClassNotFoundException {
        return new BasicCassandraMappingContext();
    }
}

Thanks to mp911de for putting me in the ballpark of where to look for the solution!

Question:

I have two model classes:

public class AlertMatchesDTO implements Serializable
{
  private static final long serialVersionUID = -3704734448105124277L;

  @PrimaryKey
  private String alertOid;

  @Column("matches")
  private List<HotelPriceDTO> matches;
...
}

public class HotelPriceDTO implements Serializable
{
  private static final long serialVersionUID = -8751629882750913707L;

  private Long hotelOid;
  private double priceByNight;
  private Date checkIn;
  private Date checkOut;
...
}

and I want to persist instances of the first class in a Cassandra column family using Spring Data, in particular using the Cassandra template like this:

...
cassandraTemplate.insert(dto, writeOptions); 
...

and Spring Data has problems serializing List<HotelPriceDTO>. What I think I need is a way to tell cassandraTemplate how to convert the type. The official documentation has a chapter saying that I have to use CassandraMappingConverter and MappingCassandraConverter, but it does not provide an example yet.

My question is: is there an example of how to register a converter like this (in the test code of the project, maybe?), or any other example I can use until the official documentation is completed? Thanks in advance.


Answer:

Hate to say this, but you should RTFM at http://docs.spring.io/spring-data/cassandra/docs/1.1.0.RELEASE/reference/html/.

Having said that, I noticed the DTO suffixes on your class names, which implies to me that you may not have a domain model, only a service layer with DTOs. If that's the case, you might consider defining the mappings yourself as RowMapper implementations and simply use CqlTemplate without the bells & whistles of Spring Data Cassandra. If you choose to fuse the architectural concepts of DTO and entity (entity being a persistent domain object), you're free to use Spring Data Cassandra along with the mapping metadata required (@Table, @PrimaryKeyColumn, etc). Your choice.

See http://goo.gl/gPBFpu for more reading on the subject of entities v. DTOs.

Question:

I want to check in Java whether a column exists in a Cassandra table, and then perform an action if it does. How do I do that?


Answer:

You can get the table definition via the driver's Metadata class. Something like:

ColumnMetadata column = cluster.getMetadata().getKeyspace("ks-name")
   .getTable("table-name").getColumn("column-name");
if (column != null) {
   // do your stuff
}

Question:

I don't understand how to achieve a very simple goal with Spring Data Cassandra.

I want to execute an "INSERT" statement multiple times with different parameter values. I don't have a mapped domain class at the moment, so I use the CqlOperations interface provided by Spring Data.

When I just use execute(String cql, Object... args), Cassandra driver complains about "Re-preparing already prepared query is generally an anti-pattern and will likely affect performance. Consider preparing the statement only once". Because Spring Data uses SimplePreparedStatementCreator. But I don't see any way to tell Spring Data to use CachedPreparedStatementCreator instead. All I see is execute(PreparedStatementCreator psc) method which does not allow me to provide parameters values.

So, is there any way to either tell Spring Data to use proper statement cache or to achieve something similar to execute(PreparedStatementCreator, Object...)?


Answer:

CqlTemplate exposes callback and customization hooks that allow for tailoring some of its functionality to the needs of your application.

CqlTemplate intentionally comes without caching, as caching is a time-versus-space tradeoff. Spring Data Cassandra cannot make that decision for you, as we cannot assume what applications typically require.

Spring Data Cassandra's package core.cql.support ships with support for CachedPreparedStatementCreator and a PreparedStatementCache that you can use for that purpose.

Subclass CqlTemplate and override its newPreparedStatementCreator(…) method to specify which PreparedStatementCreator to use. The following shows an example for a cache with infinite retention:

public class MyCachedCqlTemplate extends CqlTemplate {

    PreparedStatementCache cache = MapPreparedStatementCache.create();

    @Override
    protected PreparedStatementCreator newPreparedStatementCreator(String cql) {
        return CachedPreparedStatementCreator.of(cache, cql);
    }

}
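To put the subclass to use, you can expose it as a bean in place of the default CqlTemplate. The following is a sketch only: it assumes Spring Data Cassandra 2.x, where the template inherits setSession(…) from CassandraAccessor, and that a driver Session bean is already configured; adjust to your wiring.

```java
import com.datastax.driver.core.Session;

import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;
import org.springframework.data.cassandra.core.cql.CqlTemplate;

// Sketch only: assumes the MyCachedCqlTemplate subclass from above.
@Configuration
public class CqlTemplateConfig {

    @Bean
    public CqlTemplate cqlTemplate(Session session) {
        MyCachedCqlTemplate template = new MyCachedCqlTemplate();
        template.setSession(session); // inherited from CassandraAccessor
        return template;
    }
}
```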

Question:

Can someone help me figure out how to insert Cassandra UDT data using a Spring POJO class?

I created one POJO class to map Cassandra's table and another class for the Cassandra UDT, but when I insert the main POJO class (mapping the table), the other POJO class (mapping the UDT) is not recognized. I have also put the annotations on every class and every field.

Here is my first POJO class:

package com.bdbizviz.smb.model.entity;

import org.springframework.cassandra.core.PrimaryKeyType;
import org.springframework.data.cassandra.mapping.PrimaryKeyColumn;
import org.springframework.data.cassandra.mapping.Table;

import com.bdbizviz.smb.cassandra.udt.CoverPhoto;
import com.bdbizviz.smb.cassandra.udt.Location;
import com.bdbizviz.smb.cassandra.udt.MailingAddress;
import com.bdbizviz.smb.cassandra.udt.Reference;
import com.bdbizviz.smb.cassandra.udt.RestaurantServices;
import com.bdbizviz.smb.cassandra.udt.RestaurantSpecial;
import com.bdbizviz.smb.cassandra.udt.VoipInfo;
import com.datastax.driver.mapping.annotations.Frozen;


@Table("source")
public class Sources {

    Integer likes;
    Integer followers;
    Integer rating;
    String last_processedtime;
    String filter_str;
    String filter_type;
    String category;

    @PrimaryKeyColumn(ordinal = 0, type = PrimaryKeyType.PARTITIONED)
    String admin;

    @PrimaryKeyColumn(ordinal = 1, type = PrimaryKeyType.CLUSTERED)
    String name;

    @PrimaryKeyColumn(ordinal = 2, type = PrimaryKeyType.CLUSTERED)
    String source;

    @PrimaryKeyColumn(ordinal = 3, type = PrimaryKeyType.CLUSTERED)
    Integer id;

    @PrimaryKeyColumn(ordinal = 4, type = PrimaryKeyType.CLUSTERED)
    String type;

    Integer delete_flag;
    Integer evaluated;

    @Frozen
    Reference fb_reference_id_name;

    @Frozen
    MailingAddress fb_mailingaddress;

    @Frozen
    CoverPhoto fb_coverphoto;

    @Frozen
    VoipInfo fb_voipinfo;

    @Frozen
    RestaurantServices fb_restaurantservice;

    @Frozen
    RestaurantSpecial fb_restaurantspecialties;

    @Frozen
    Location fb_location;

    /* Setter and Getter */
}

And here is the other POJO class, mapping the UDT:

package com.bdbizviz.smb.cassandra.udt;

import org.springframework.data.annotation.Persistent;
import com.datastax.driver.mapping.annotations.Field;
import com.datastax.driver.mapping.annotations.UDT;

@UDT(name = "reference", keyspace = "smb")
public class Reference {

    @Field(name = "id")
    String id;

    @Field(name = "name")
    String name;

    /* Setter and Getter */
}

Answer:

Spring Data Cassandra does not support the UDT feature. You have to use the DataStax Cassandra driver directly for this feature.

Here is the Maven dependency for the Cassandra driver:

<dependency>
    <groupId>com.datastax.cassandra</groupId>
    <artifactId>cassandra-driver-dse</artifactId>
    <version>2.1.6</version>
</dependency>

Question:

As per the documentation: https://docs.spring.io/spring-data/cassandra/docs/2.1.4.RELEASE/reference/html/#repositories.limit-query-result

Spring Data Cassandra should make it easy to get the pagination info, but I can't get this to work.

Repo, Call and Errors:

1. Reactive Call

Repo:

public interface MyRepository extends ReactiveCassandraRepository<MyClass, String> {
  @Query("select * from my_keyspace.my_table where solr_query = ?0")
  Mono<Slice<MyClass>> findMono(String solrQuery, Pageable page);
}

Call:

Mono<Slice<MyClass>> result = repository.findMono(queryString, CassandraPageRequest.first(20));

Error:

"exceptionDescription":"org.springframework.core.codec.CodecException: Type definition error: [simple type, class com.datastax.driver.core.PagingState]; nested exception is com.fasterxml.jackson.databind.exc.InvalidDefinitionException: No serializer found for class com.datastax.driver.core.PagingState and no properties discovered to create BeanSerializer (to avoid exception, disable SerializationFeature.FAIL_ON_EMPTY_BEANS) (through reference chain: org.springframework.data.domain.SliceImpl[\"pageable\"]->org.springframework.data.cassandra.core.query.CassandraPageRequest[\"pagingState\"])","lines":["org.springframework.http.codec.json.AbstractJackson2Encoder.encodeValue(AbstractJackson2Encoder.java:175)","org.springframework.http.codec.json.AbstractJackson2Encoder.lambda$encode$0(AbstractJackson2Encoder.java:122)","reactor.core.publisher.FluxMap$MapSubscriber.onNext(FluxMap.java:100)","reactor.core.publisher.FluxSwitchIfEmpty$SwitchIfEmptySubscriber.onNext(FluxSwitchIfEmpty.java:67)","reactor.core.publisher.FluxMap$MapSubscriber.onNext(FluxMap.java:114)","reactor.core.publisher.FluxDefaultIfEmpty$DefaultIfEmptySubscriber.onNext(FluxDefaultIfEmpty.java:92)","reactor.core.publisher.Operators$MonoSubscriber.complete(Operators.java:1476)","reactor.core.publisher.MonoFlatMap$FlatMapInner.onNext(MonoFlatMap.java:241)","reactor.core.publisher.FluxMapFuseable$MapFuseableSubscriber.onNext(FluxMapFuseable.java:121)","reactor.core.publisher.Operators$MonoSubscriber.complete(Operators.java:1476)","reactor.core.publisher.MonoCollectList$MonoBufferAllSubscriber.onComplete(MonoCollectList.java:118)","reactor.core.publisher.FluxTake$TakeFuseableSubscriber.onComplete(FluxTake.java:424)","reactor.core.publisher.FluxTake$TakeFuseableSubscriber.onNext(FluxTake.java:404)","reactor.core.publisher.FluxIterable$IterableSubscription.fastPath(FluxIterable.java:311)","reactor.core.publisher.FluxIterable$IterableSubscription.request(FluxIterable.java:198)"],

2. Reactive with ReactiveSortingRepository

Repo:

public interface LocationRepository extends ReactiveSortingRepository<MyClass, String> {
}

Call:

 repository.findAll(CassandraPageRequest.first(20))

Error:

Syntax error: findAll can't be applied to CassandraPageRequest.

3. Simple call to get the page.

Repo:

public interface MyRepository extends CassandraRepository<MyClass, MyClassKey> {
Page<MyClass> findByKeyTerminalIdAndSolrQuery(String solrQuery, Pageable page);
}

Error while starting:

Caused by: org.springframework.dao.InvalidDataAccessApiUsageException: Page queries are not supported. Use a Slice query.

4. Using PagingAndSortingRepository

Repo:

public interface MyRepository extends PagingAndSortingRepository<MyClass, MyClassKey> {

}

Call:

   Page<Vessel> vessels = repository.findAll(CassandraPageRequest.first(10));

Error:

springframework.data.mapping.PropertyReferenceException: No property findAll found for type MyClass!


Answer:

Welcome to Stack Overflow.

The first example is the appropriate one:

public interface MyRepository extends ReactiveCassandraRepository<MyClass, String> {
  @Query("select * from my_keyspace.my_table where solr_query = ?0")
  Mono<Slice<MyClass>> findMono(String solrQuery, Pageable page);
}

Mono<Slice<MyClass>> result = repository.findMono(queryString, CassandraPageRequest.first(20));

The issue is that Jackson cannot encode a SliceImpl (the implementation of Slice) as you're passing it to WebFlux (according to the stack trace). The query yields the right result, but you need to pass on the Slice content (e.g. via Slice.getContent()), not the Slice itself, if you want to JSON-encode it.

On a related note: ReactiveCassandraRepository does not implement ReactiveSortingRepository because Cassandra queries with a Sort argument always require a WHERE clause. Looking at ReactiveSortingRepository, you'll see a findAll(Sort) method that does not take a filter criterion:

public interface ReactiveSortingRepository<T, ID> extends ReactiveCrudRepository<T, ID> {
    Flux<T> findAll(Sort sort);
}

Question:

I am trying to find the answer to a simple question. Let's say I have a table in which I store content; the content is just a string. What is the maximum length for this specific column? I read that the text type is just an alias for varchar. Is varchar limited to 255 characters, or can it hold more?


Answer:

You can use a TEXT column in your database (about 64 KB of characters).

As you know, a Java String can hold at most 2^31 - 1 characters.

Question:

Suppose I have a users table in cassandra called 'UserPrincipal', the repository will look something like the following

public interface UserRepository extends CassandraRepository<UserPrincipal> 
{
   @Query("SELECT * FROM UserPrincipal WHERE email = ?0")
   UserPrincipal findByEmailAddress(String emailAddress);   
}

If I need to query the table by username, for example, I have to denormalize the table and create a duplicate; let's call it UserPrincipalByUsername, which is identical to the first one and differs only in the primary key. Now, can I use the following interface as a repository? And what about saving/removing a user to/from both tables simultaneously to maintain data consistency?

public interface UserRepository extends CassandraRepository<UserPrincipal> 
{
   @Query("SELECT * FROM UserPrincipal WHERE email = ?0")
   UserPrincipal findByEmailAddress(String emailAddress);   

   @Query("SELECT * FROM UserPrincipalByUsername WHERE username= ?0")
   UserPrincipal findByUsername(String username);   
}

It can be noted that two separate interfaces can be used to deal with each table alone, but still, I need to have some logic to maintain the consistency at some point.

I am using Cassandra 2.0.11, CQL spec 3.1.1, Spring data Cassandra 1.3.2 and Spring boot 1.3.1


Answer:

The only procedure I found to solve this is, as mentioned in the question, to use two separate interfaces to deal with each table alone. I added a wrapper class that uses both of them to save with a single call. This doesn't guarantee consistency all the time (in case of a server/system failure, for example), but it is OK for my specific application.
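A sketch of such a wrapper is shown below. UserByUsernameRepository and UserPrincipalByUsername are hypothetical names for the repository and entity of the denormalized table; note that the two saves are still not atomic:

```java
import org.springframework.stereotype.Service;

@Service
public class UserService {

    private final UserRepository userRepository;
    private final UserByUsernameRepository userByUsernameRepository;

    public UserService(UserRepository userRepository,
                       UserByUsernameRepository userByUsernameRepository) {
        this.userRepository = userRepository;
        this.userByUsernameRepository = userByUsernameRepository;
    }

    // Best-effort dual write: a failure between the two saves can still
    // leave the tables inconsistent, exactly as noted in the answer.
    public void save(UserPrincipal user, UserPrincipalByUsername byUsername) {
        userRepository.save(user);
        userByUsernameRepository.save(byUsername);
    }
}
```

If stronger guarantees are needed, a Cassandra logged BATCH spanning both tables gives atomicity (though not isolation) at some performance cost.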

Question:

Please tell me whether I can use Spring Data 1.4 with Cassandra 3.5. If not, which ORM would you suggest?

Thanks


Answer:

No, you can't.

Spring Data Cassandra version 1.4.1 is pulling Cassandra version 2.1.11 (https://repo1.maven.org/maven2/org/springframework/data/spring-data-cassandra-parent/1.4.1.RELEASE/spring-data-cassandra-parent-1.4.1.RELEASE.pom)

If you want an object mapper for Cassandra 3.5 :

Edit: Spring Data Cassandra 1.5 has just been released and now supports Java driver 3.1.3

Question:

Is there any way to connect a Spring Boot application to two different Cassandra data sources by using Spring Boot and Spring Data?

I tried to configure 2 different data sources but Spring Boot chooses the first one and ignores the other.

Thank you


Answer:

Spring Boot supports out of the box only singleton data sources and it configures a single Session with a single CassandraTemplate.

Since Spring Data 2.0, CassandraTemplate supports a SessionFactory that can route calls to different Cassandra Sessions. That's something you need to configure yourself:

@Configuration
class MyConfig {

  @Bean
  CassandraTemplate cassandraTemplate(CassandraConverter converter) {
    SessionFactory factory = …;
    return new CassandraTemplate(factory, converter);
  }
}

You might want to take a look into AbstractRoutingSessionFactory for building your own Session router.
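A rough sketch of such a router is shown below, assuming you build and register the target SessionFactory instances yourself. TenantContext is a hypothetical helper holding the current lookup key; the routing criterion is entirely up to you:

```java
import org.springframework.data.cassandra.core.cql.session.lookup.AbstractRoutingSessionFactory;

public class TenantRoutingSessionFactory extends AbstractRoutingSessionFactory {

  @Override
  protected Object determineCurrentLookupKey() {
    // decide per call which of the registered session factories to use,
    // e.g. based on a ThreadLocal-bound tenant identifier
    return TenantContext.getCurrentTenant(); // hypothetical helper
  }
}
```

You then register the target session factories on the router and pass the router to the CassandraTemplate as shown above.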

Question:

If I annotate a class with annotations from com.datastax.driver.mapping.annotations, I can write a test along the lines of:

MappingManager manager = new MappingManager(session);
Mapper<MyAnnotatedClass> mapper = manager.mapper(MyAnnotatedClass.class);

MyAnnotatedClass entity = ...;

RegularStatement saveQuery = (RegularStatement) mapper.saveQuery(entity);

assertEquals("...", saveQuery.getQueryString());

However I have entity classes annotated with org.springframework.data.cassandra.mapping annotations. I've been unable to find a Spring equivalent to Mapper's saveQuery(), getQuery() and deleteQuery().

How can I write (ideally lightweight at runtime) tests regarding the CQL generated from Spring Data Cassandra-annotated entity classes?


Answer:

With Spring Data for Apache Cassandra 1.5, you can write the following code to create Statements:

CassandraTemplate template = …

Person person = …

CqlIdentifier tableName = template.getTableName(Person.class);

Insert insert = CassandraTemplate.createInsertQuery(tableName.toCql(), person, 
                    new WriteOptions(), template.getConverter());

Delete delete = CassandraTemplate.createDeleteQuery(tableName.toCql(), person,
                    new WriteOptions(), template.getConverter());

Update update = CassandraTemplate.createUpdateQuery(tableName.toCql(), person, 
                    new WriteOptions(), template.getConverter());

Note: Spring Data for Apache Cassandra 1.5 uses BATCH statements for inserts, that's going to change with the release 2.0.

CassandraTemplate and CassandraConverter are the key classes involved in query creation for version 1.5. In Spring Data 2.0, things are going to change a bit, as 2.0 is going to ship with additional Query and Update types for partial entity updates. So query creation moves from CassandraTemplate.create…Query(…) to QueryUtils.create…Query(…).

Question:

The answer to the question Cassandra Optimistic Locking describes that optimistic locking (versioning) is possible in Cassandra.

My question is how to do it in Spring Boot?


Answer:

Lightweight transactions in Spring Data Cassandra are supported via the InsertOptions and UpdateOptions classes: you create an instance via the corresponding builder, for example InsertOptions.InsertOptionsBuilder, and then pass it to the corresponding insert or update operation.

The result of the operation is obtained from the WriteResult instance returned by insert/update, by calling its wasApplied() method.

More detailed information can be found in the documentation.
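A sketch against the Spring Data Cassandra 2.x API; the account entity and the cassandraTemplate variable are illustrative, not from the question:

```java
import org.springframework.data.cassandra.core.InsertOptions;
import org.springframework.data.cassandra.core.WriteResult;

InsertOptions options = InsertOptions.builder()
    .withIfNotExists() // adds IF NOT EXISTS, i.e. a lightweight transaction
    .build();

WriteResult result = cassandraTemplate.insert(account, options);

if (!result.wasApplied()) {
  // another writer won the race: re-read and retry, or report a conflict
}
```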

Question:

I'm setting up a new reactive Cassandra REST service on Spring, and there are "default" fields like is_deleted, is_active, storeid, etc. on all tables.

Since it's assumed that is_deleted is needed in the WHERE clause, it's created as part of the composite PK so that data searches are faster.

The problem is that, because of this, the primary key is very fat and search queries become very long, since they need to mention every default key.

Is it good practice to have such a fat composite PK in Cassandra?

@PrimaryKeyColumn(name = BaseCassandraFields.STORE_ID, type = PrimaryKeyType.PARTITIONED)
  protected String storeId;

@PrimaryKeyColumn(name = BaseCassandraFields.IS_DELETED, type = PrimaryKeyType.PARTITIONED)
  protected Boolean isDeleted = false;

Example of table

Also here is the DDL


Answer:

It doesn't affect Cassandra internally, but if you don't need all those keys, you're putting stress on the development part needlessly. I personally find it weird to have booleans in the PK, but your use case might justify it. You could argue that maybe Cassandra has a bit of extra overhead calculating the hash for the key due to more columns, but I doubt that's significant since hash functions usually have high performance.

Question:

I have the following user info object mapped to a table "user_info" in my keyspace "data_collection". I have created the "user_info" table in my Cassandra database. I am using Spring Data Cassandra to connect to the Cassandra database from Java, with the Spring annotations shown below.

@Table(name="user_info",keyspace="data_collection", caseSensitiveKeyspace = false,caseSensitiveTable = false)

public class UserInfo {
    @PartitionKey
    private UUID id;

    @PrimaryKeyColumn
    private String email;

    private int phone;

    private String name;

    public String getName() {
        return name;
    }
    public void setName(String name) {
        this.name = name;
    }
    public String getEmail() {
        return email;
    }
    public void setEmail(String email) {
        this.email = email;
    }
    public int getPhone() {
        return phone;
    }
    public void setPhone(int phone) {
        this.phone = phone;
    }
}

I am using the following code to insert a record into my "user_info" table.

    @Autowired
    CassandraTemplate cassandraTemplate;

    public void saveUserInfo(UserInfo userInfo){
        logger.debug("userInfo "+new Gson().toJson(userInfo));
        String email = userInfo.getEmail();
        Select select = QueryBuilder.select().from("user_info");
        select.where(QueryBuilder.eq("email", email));
        logger.debug("Query "+select.toString());
        UserInfo existingUser = cassandraTemplate.selectOne(select, UserInfo.class);
       if(existingUser!=null){
            cassandraTemplate.update(userInfo);
       }
       else{
            cassandraTemplate.insert(userInfo);
       }

    }

My selectOne works properly, whereas during insert I get the following exception. I have clearly mapped the UserInfo.java class to the table name "user_info" using the annotation above. I don't know why the insert is trying to go to the table "userinfo".

org.springframework.cassandra.support.exception.CassandraInvalidQueryException: unconfigured columnfamily userinfo; nested exception is com.datastax.driver.core.exceptions.InvalidQueryException: unconfigured columnfamily userinfo
    at org.springframework.cassandra.support.CassandraExceptionTranslator.translateExceptionIfPossible(CassandraExceptionTranslator.java:128)
    at org.springframework.cassandra.core.CqlTemplate.potentiallyConvertRuntimeException(CqlTemplate.java:946)
    at org.springframework.cassandra.core.CqlTemplate.translateExceptionIfPossible(CqlTemplate.java:930)
    at org.springframework.cassandra.core.CqlTemplate.translateExceptionIfPossible(CqlTemplate.java:912)
    at org.springframework.cassandra.core.CqlTemplate.doExecute(CqlTemplate.java:278)
    at org.springframework.cassandra.core.CqlTemplate.doExecute(CqlTemplate.java:559)
    at org.springframework.cassandra.core.CqlTemplate.execute(CqlTemplate.java:1333)
    at org.springframework.data.cassandra.core.CassandraTemplate.doUpdate(CassandraTemplate.java:895)
    at org.springframework.data.cassandra.core.CassandraTemplate.update(CassandraTemplate.java:537)
    at org.springframework.data.cassandra.core.CassandraTemplate.update(CassandraTemplate.java:532)

Please find below the description of the table in cassandra.

CREATE TABLE user_info (
  name text,
  email text,
  phone int,
  PRIMARY KEY ((email))
) WITH
  bloom_filter_fp_chance=0.010000 AND
  caching='KEYS_ONLY' AND
  comment='' AND
  dclocal_read_repair_chance=0.100000 AND
  gc_grace_seconds=864000 AND
  index_interval=128 AND
  read_repair_chance=0.000000 AND
  replicate_on_write='true' AND
  populate_io_cache_on_flush='false' AND
  default_time_to_live=0 AND
  speculative_retry='99.0PERCENTILE' AND
  memtable_flush_period_in_ms=0 AND
  compaction={'class': 'SizeTieredCompactionStrategy'} AND
  compression={'sstable_compression': 'LZ4Compressor'};

Quick update: I just tried saving another class, Test.java, mapped to a table "test_info". I got the following error:

org.springframework.cassandra.support.exception.CassandraInvalidQueryException: unconfigured columnfamily test; nested exception is com.datastax.driver.core.exceptions.InvalidQueryException: unconfigured columnfamily test
    at org.springframework.cassandra.support.CassandraExceptionTranslator.translateExceptionIfPossible(CassandraExceptionTranslator.java:128)
    at org.springframework.cassandra.core.CqlTemplate.potentiallyConvertRuntimeException(CqlTemplate.java:946)
    at org.springframework.cassandra.core.CqlTemplate.translateExceptionIfPossible(CqlTemplate.java:930)
    at org.springframework.cassandra.core.CqlTemplate.translateExceptionIfPossible(CqlTemplate.java:912)
    at org.springframework.cassandra.core.CqlTemplate.doExecute(CqlTemplate.java:278)
    at org.springframework.cassandra.core.CqlTemplate.doExecute(CqlTemplate.java:559)
    at org.springframework.cassandra.core.CqlTemplate.execute(CqlTemplate.java:1323)
    at org.springframework.data.cassandra.core.CassandraTemplate.doInsert(CassandraTemplate.java:708)
    at org.springframework.data.cassandra.core.CassandraTemplate.insert(CassandraTemplate.java:290)
    at org.springframework.data.cassandra.core.CassandraTemplate.insert(CassandraTemplate.java:285)

I am just wondering whether my Java class name and the table name in Cassandra must always be the same, because it's looking for the column family "test" instead of "test_info", which I specified in the @Table annotation.

Below is the description of my keyspace

CREATE KEYSPACE data_collection WITH replication = {
  'class': 'SimpleStrategy',
  'replication_factor': '3'
};

EDIT - SOLVED: I found the solution based on the conversation with @pinkpanther. I had imported com.datastax.driver.mapping.annotations.Table instead of org.springframework.data.cassandra.mapping.Table, which is why it didn't honor the table name mapping. Thanks for your help.


Answer:

The problem might be with the package import of Table, Spring Data Cassandra needs org.springframework.data.cassandra.mapping.Table. Replace the imported com.datastax.driver.mapping.annotations.Table with it.

Question:

I am trying to get a User using cassandraOperations.select(s, User.class), but I am getting the below error:

Caused by: java.lang.IllegalArgumentException: Can not set boolean field com.rogs.cassandra.User.userStatus to null value

The error is correct, as I have null values for userStatus for some users in the Cassandra DB. Is there any way to ignore the null while getting results with cassandraOperations?

My User class is here.

@Table
public class User{
    @PrimaryKey
    private String userId;
    private String userName;
    private String userDept;
    private boolean userStatus;
    ....
}

Answer:

The exception you are getting is IllegalArgumentException, not NullPointerException: you are trying to set a null value on a primitive boolean field, which can only hold true/false. You should change the primitive boolean to the wrapper type, like below.

 private Boolean userStatus;

Question:

So since 1.5, we can map UDT with @UserDefinedType. We are experiencing errors when trying to map one of these :

@UserDefinedType("criteria")
public class Criteria {

  @CassandraType(type = Name.VARCHAR)
  private String cle;

  @CassandraType(type = Name.VARCHAR)
  private String nom;

  @CassandraType(type = Name.VARCHAR)
  private String format;
}

We are getting this error at launch :

org.springframework.dao.InvalidDataAccessApiUsageException: Unknown type [class java.lang.String] for property [cle] in entity [com.laposte.ariane.udt.Critere]; only primitive types and Collections or Maps of primitive types are allowed

To avoid this, we removed @CassandraType annotations, but it doesn't seem right.

What's wrong with our mapping?


Answer:

The DataStax driver reports only Name.TEXT as a primitive data type, not Name.VARCHAR, via DataType.allPrimitiveTypes(). Spring Data Cassandra uses DataType.allPrimitiveTypes() to resolve name-to-type mappings. I filed a ticket to add explicit type mappings for these two.
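Until those mappings are added, a workaround (instead of dropping the annotations entirely) is to declare the UDT fields as Name.TEXT, since text and varchar are the same type on the server side:

```java
import org.springframework.data.cassandra.mapping.CassandraType;
import org.springframework.data.cassandra.mapping.UserDefinedType;

import com.datastax.driver.core.DataType.Name;

@UserDefinedType("criteria")
public class Criteria {

  @CassandraType(type = Name.TEXT) // text == varchar in Cassandra
  private String cle;

  @CassandraType(type = Name.TEXT)
  private String nom;

  @CassandraType(type = Name.TEXT)
  private String format;
}
```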

See also:

  • Cassandra: text vs varchar

Question:

I have an entity that is grabbed from Cassandra by a repository. In it are some custom fields that I want set when certain managed fields are set by Spring/Cassandra.

But when I try to put the primary key signifier on the getter method (similar to JPA) it doesn't use the methods. How do I get it to call them when setting the fields?

@Table(name="entity")
public class MyEntity {
    private String calculatedField;
    private CompoundKey pk;

    ...elided...

    @PrimaryKey
    public void setPk(CompoundKey pk) {
        this.pk = pk;

        //do some calculations...
        this.calculatedField = pk.getField() + val;
    }
}

This always leaves calculatedField as null.


Answer:

The AccessType annotation is exactly for that purpose. Your entity should look like this:

@AccessType(Type.PROPERTY) 
public class MyEntity {
    //...
}

Question:

I have three columns in my cassandra table (A,B,C). A -- PARTITIONED key, B -- CLUSTERED key

I want to query on A with a list of values I will pass, and on B with a single value. I don't want to use @Query (I want to use something like findAll()).

Any suggestion ?


Answer:

You have two options for now:

  1. Using CassandraTemplate or CqlTemplate, passing in a Select you built yourself.
  2. Wait until DATACASS-7 is resolved (under review now) and use the repository abstraction with findByAInAndB(Collection<String> aValues, String bValue). Expect a milestone release right before August.
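Option 1 might look like the following sketch; the table name, columns, values, and MyEntity are illustrative, not from the question:

```java
import java.util.Arrays;
import java.util.List;

import com.datastax.driver.core.querybuilder.QueryBuilder;
import com.datastax.driver.core.querybuilder.Select;

// query: partition key "a" IN a list of values, clustering key "b" = one value
Select select = QueryBuilder.select().from("my_table");
select.where(QueryBuilder.in("a", Arrays.asList("a1", "a2")))
      .and(QueryBuilder.eq("b", "someValue"));

List<MyEntity> result = cassandraTemplate.select(select, MyEntity.class);
```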

Question:

I'm trying to connect to a Cassandra database using Spring Data (preferably JPA). I cannot find any clear example of how to do it, nor a guide. I found some for MongoDB and Neo4j, but none for Cassandra. On the main Spring page there is a mention of a Cassandra project, but no example or guide is provided. Can anyone help?


Answer:

Which version of spring-data-cassandra are you using?

For v.1 see http://docs.spring.io/spring-data/cassandra/docs/1.1.0.RC1/reference/html/#cassandra-connectors

I'm using v.2 and also had problems finding tutorials/examples. But there are tests inside the lib itself. See e.g. spring-data-cassandra/cassandra/src/test/resources/org/springdata/cassandra/test/integration/config/XmlConfigTest-context.xml - you need to change only a few things to make it work with your DB. When the config is ready you can use CqlOperations to run your queries:

@Autowired
private CqlOperations cassandraTemplate;

cassandraTemplate.buildSaveNewOperation(new Foo("bar")).execute();

And that's basically it :)

Question:

I started a basic Spring project. I use Spring data Cassandra.

My model use the builder pattern. Here's my User class:

@Table("user")
@JsonDeserialize(builder = User.Builder.class)
public class User {

@PrimaryKey
public final String username;
@Column("password")
public final String password;
@Column("firstname")
public final String firstName;
@Column("lastname")
public final String lastName;
@Column("birthdate")
public final Date birthDay;
@Column("emailaddress")
public final String emailAddress;
@Column("organization")
public final String organization;
@Column("locality")
public final String locality;
@Column("stateprovince")
public final String stateProvince;
@Column("countrycode")
public final String countryCode;
@Column("disabled")
public final boolean disabled;
@Column("registrationdate")
public final Date registrationDate;

private User(Builder builder) {
    username = builder.username;
    password = builder.password;
    firstName = builder.firstName;
    lastName = builder.lastName;
    birthDay = builder.birthDay;
    emailAddress = builder.emailAddress;
    organization = builder.organization;
    locality = builder.locality;
    stateProvince = builder.stateProvince;
    countryCode = builder.countryCode;
    disabled = builder.disabled;
    registrationDate = builder.registrationDate;
}

public static Builder newBuilder() {
    return new Builder();
}

public static Builder newBuilder(User copy) {
    Builder builder = new Builder();
    builder.username = copy.username;
    builder.password = copy.password;
    builder.firstName = copy.firstName;
    builder.lastName = copy.lastName;
    builder.birthDay = copy.birthDay;
    builder.emailAddress = copy.emailAddress;
    builder.organization = copy.organization;
    builder.locality = copy.locality;
    builder.stateProvince = copy.stateProvince;
    builder.countryCode = copy.countryCode;
    builder.disabled = copy.disabled;
    builder.registrationDate = copy.registrationDate;
    return builder;
}

public static final class Builder {
    private String username;
    private String password;
    private String firstName;
    private String lastName;
    private Date birthDay;
    private String emailAddress;
    private String organization;
    private String locality;
    private String stateProvince;
    private String countryCode;
    private boolean disabled;
    private Date registrationDate;

    private Builder() {
    }

    public static Builder newStub() {
        return new Builder()
                .withUsername("stub")
                .withEmailAddress("stub@unittest.com")
                .withPassword("dontforgettests")
                .withCountryCode("9")
                .withFirstName("Someone")
                .withLastName("Overarainbow");
    }

    public Builder withUsername(String val) {
        username = val;
        return this;
    }
     //etc
}

I have no problem on the controller side deserializing the JSON from a POST request to a User object. There's also no problem when I call the repo to save the object, but when I declare:

User getByUsername(String username);

Then I get the following error: No property builder found on entity class com.project.model.user.User to bind constructor parameter

How can I solve this issue?


Answer:

To create Spring Data entities, there are mandatory elements that should be present in your POJO: a full-arguments constructor, plus getters and setters. Also, don't hesitate to add a default constructor and to override toString, equals and hashCode. This is an example of an entity class using Spring Data Cassandra terminology:

@Table
public class Person {

  @PrimaryKey
  private final String id;

  private final String name;
  private final int age;

  public Person(String id, String name, int age) {
    this.id = id;
    this.name = name;
    this.age = age;
  }

  public String getId() {
    return id;
  }

  public String getName() {
    return name;
  }

  public int getAge() {
    return age;
  }

  @Override
  public String toString() {
    return String.format("{ @type = %1$s, id = %2$s, name = %3$s, age = %4$d }",
      getClass().getName(), getId(), getName(), getAge());
  }
}

Question:

I want to use CassandraBatchTemplate's withTimestamp to insert a client-side timestamp, like the USING TIMESTAMP clause in CQL. Here is my code:

  @Bean
  public DseSession dseSession(DseCluster dseCluster) {
    return dseCluster.connect(keyspace);
  }

  @Bean
  public CassandraOperations cassandraTemplate(DseSession session) {
    return new CassandraTemplate(session);
  }

  @Bean
  public CassandraBatchOperations cassdraBatchTemplate(CassandraOperations cassandraTemplate) {
    return new CassandraBatchTemplate(cassandraTemplate);
  }

When compiled, it complained that it cannot find CassandraBatchTemplate, even though I can see it in the spring-data-cassandra source code. One thing I noticed is that CassandraBatchTemplate is the default implementation of the interface CassandraBatchOperations, and thus no 'public' modifier is applied to the CassandraBatchTemplate class:

class CassandraBatchTemplate implements CassandraBatchOperations {...}

If the class is not public then I cannot create an instance of it with 'new'. How do I work around this? I'm using spring-data-cassandra 2.1.10.RELEASE and dse-java-driver-core 1.8.2.


Answer:

CassandraBatchTemplate isn't public because it has a very limited lifecycle. It isn't intended to be used as a @Bean because it is only valid for a single execution.

Instead, obtain CassandraBatchOperations through CassandraOperations.batchOps().
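So the timestamped batch from the question could be written as the following sketch; the entity variable is illustrative, and the timestamp is in microseconds, matching USING TIMESTAMP semantics:

```java
// microseconds since the epoch, as CQL's USING TIMESTAMP expects
long timestampMicros = System.currentTimeMillis() * 1000;

cassandraTemplate.batchOps()     // returns a fresh CassandraBatchOperations
    .insert(entity)
    .withTimestamp(timestampMicros)
    .execute();                  // a batch can only be executed once
```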

Question:

I have a 3 node Cassandra cluster with Replication factor as 2 and read-write consistency set to QUORUM. We are using Spring data Cassandra. All infrastructure is deployed using Kubernetes.

Now in normal use case many records gets inserted to Cassandra table. Then we try to modify/update one of the record using save method of repo, like below:

ChunkMeta tmpRec = chunkMetaRepository.save(chunkMeta);

After execution of the above statement we never see any exception or error, but the update still fails intermittently. That is, when we check the record in the DB, sometimes it has been updated successfully, whereas at other times it has not. Also, when we print tmpRec from the above call, it contains the updated and correct values. Still, these updated values are not reflected in the DB.

We checked the Cassandra transport TRACE logs on all nodes and found that our queries are logged there and are being executed.

Now another weird observation: all of this works if I use a single Cassandra node (in Kubernetes), or if we deploy the above infra using Ansible (it even works with 3 nodes under Ansible).

It looks some issue is specifically with the Kubernetes 3 node deployment of Cassandra. Primarily looks like replication among nodes causing this.

Contents of Docker file:

FROM ubuntu:16.04

RUN apt-get update && apt-get install -y python sudo lsof vim dnsutils net-tools && apt-get clean && \
    addgroup testuser && useradd -g testuser testuser && usermod --password testuser testuser;

RUN mkdir -p /opt/test && \
    mkdir -p /opt/test/data;

ADD jre8.tar.gz /opt/test/
ADD apache-cassandra-3.11.0-bin.tar.gz /opt/test/

RUN chmod 755 -R /opt/test/jre && \
    ln -s /opt/test/jre/bin/java /usr/bin/java && \
    mv /opt/test/apache-cassandra* /opt/test/cassandra;

RUN mkdir -p /opt/test/cassandra/logs;

ENV JAVA_HOME /opt/test/jre
RUN export JAVA_HOME

COPY version.txt /opt/test/cassandra/version.txt

WORKDIR /opt/test/cassandra/bin/

RUN mkdir -p /opt/test/data/saved_caches && \
    mkdir -p /opt/test/data/commitlog && \
    mkdir -p /opt/test/data/hints && \
    chown -R testuser:testuser /opt/test/data && \
    chown -R testuser:testuser /opt/test;

USER testuser

CMD cp /etc/cassandra/cassandra.yml ../conf/conf.yml && perl -p -e 's/\$\{([^}]+)\}/defined $ENV{$1} ? $ENV{$1} : $&/eg; s/\$\{([^}]+)\}//eg' ../conf/conf.yml > ../conf/cassandra.yaml && rm ../conf/conf.yml && ./cassandra -f

Please note conf.yml is basically cassandra.yml file having properties related to Cassandra.


Answer:

Thanks guys, and sorry for the delayed reply.

I found the root cause of this behavior. Much later I found out that Cassandra relies on the client timestamp for the column timestamp. Client here means the different pods (instances of the microservice). In my case there were 3 containers running on different hosts. Finally, after a lot of struggle and research, I figured out that there was a slight clock drift among these containers running on different hosts. I then installed an NTP server on all these hosts, which helped us keep the time in sync across these nodes. Similar to NTP, you can also install any other time-sync server/utility to get away from the problem of clock drift between nodes.

Though this helped me, and will also help others keep node clocks in sync, in certain corner cases I found that, depending on the sync interval configured with the NTP server, there can still be instances of 2-3 seconds of drift across nodes (in my case the NTP sync interval was 2 seconds). This can be further reduced by shortening the sync interval.

But eventually the root cause was only the clock drift across nodes running microservices.
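As a side note, with the DataStax 3.x driver the client-side timestamps come from a pluggable TimestampGenerator configured on the Cluster. This doesn't remove cross-host drift (only clock synchronization does), but it shows where those timestamps originate; a sketch:

```java
import com.datastax.driver.core.AtomicMonotonicTimestampGenerator;
import com.datastax.driver.core.Cluster;

Cluster cluster = Cluster.builder()
    .addContactPoint("127.0.0.1")
    // client-side, monotonically increasing write timestamps
    // (monotonic per client process, not across hosts)
    .withTimestampGenerator(new AtomicMonotonicTimestampGenerator())
    .build();
```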

Question:

I have a spring-boot application in which I want to create Cassandra counter tables if not exist. I am using the repository for the same.

UserPoints POJO:

/**
 * 
 * @author Prakash Pandey 23-Nov-2017
 *
 */
@Table("user_points")
public class UserPoints {

    @PrimaryKeyColumn(name = "app_id", ordinal = 0, type = PrimaryKeyType.PARTITIONED)
    private long appId;

    @PrimaryKeyColumn(name = "user_name", ordinal = 0, type = PrimaryKeyType.CLUSTERED)
    private String userName;

    @Column(value = "points")
    long points;


    public long getAppId() {
        return appId;
    }

    public void setAppId(long appId) {
        this.appId = appId;
    }

    public long getPoints() {
        return points;
    }

    public void setPoints(long points) {
        this.points = points;
    }

    public String getUserName() {
        return userName;
    }

    public void setUserName(String userName) {
        this.userName = userName;
    }

    @Override
    public String toString() {
        return "UserPoints [appId=" + appId + ", userName=" + userName + ", points=" + points + "]";
    }
}

UserPointRepository POJO:

@Repository
public interface UserPointRepository extends CassandraRepository<UserPoints> {

}

The table was created successfully in Cassandra database with the below definition :

CREATE TABLE user_points (
  app_id bigint,
  user_name text,
  points bigint,
  PRIMARY KEY (app_id, user_name)
)

The problem is that the data type of the points column is bigint, while the expected type is counter.

I have two questions:

  1. How to create a counter(column having counter datatype) table using the repository.
  2. How to update (increment, decrement) a counter column using the repository.

Answer:

TL;DR

Counters are not supported through repositories, and I'm not sure there is a good way to support them.

Explanation

Spring Data Repositories are designed to save/update an entity with the data from the actual object passed to save(Object). No database-side modifiers are applied. A counter requires server-side operations (increment, decrement) which can't be expressed through a save(…) method.
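What you can do instead is issue the counter update as plain CQL, for instance through CqlTemplate; a sketch against the table from the question (the counter table itself must also be created by hand, with points counter instead of points bigint):

```java
// server-side increment: "points = points + 1" cannot be expressed via save(...)
cqlTemplate.execute(
    "UPDATE user_points SET points = points + 1 " +
    "WHERE app_id = ? AND user_name = ?",
    appId, userName);
```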

Question:

I am trying to write an example using Spring Data to connect to Cassandra (following http://docs.spring.io/spring-data/cassandra/docs/1.0.2.RELEASE/reference/html/cassandra.core.html)

Classes:

public class CassandraApp {

    private static final Logger LOG = LoggerFactory.getLogger(CassandraApp.class);

    private static Cluster cluster;
    private static Session session;

    public static void main(String[] args) {

        try {

            cluster = Cluster.builder().addContactPoints(InetAddress.getLocalHost()).build();

            session = cluster.connect("mykeyspace");

            CassandraOperations cassandraOps = new CassandraTemplate(session);

            cassandraOps.insert(new Person("1234567890", "David", 40));

            Select s = QueryBuilder.select().from("person");
            s.where(QueryBuilder.eq("id", "1234567890"));

            LOG.info(cassandraOps.queryForObject(s, Person.class).getId());

            cassandraOps.truncate("person");

        } catch (UnknownHostException e) {
            e.printStackTrace();
        }

    }
}

@Table
public class Person {

    @PrimaryKey
    private String id;

    private String name;
    private int age;

    public Person(String id, String name, int age) {
        this.id = id;
        this.name = name;
        this.age = age;
    }

    public String getId() {
        return this.id;
    }

    public String getName() {
        return name;
    }

    public int getAge() {
        return age;
    }

    @Override
    public String toString() {
        return "Person [id=" + id + ", name=" + name + ", age=" + age + "]";
    }

}

And I get this exception:

log4j:WARN No appenders could be found for logger (com.datastax.driver.core.Cluster).
log4j:WARN Please initialize the log4j system properly.
log4j:WARN See http://logging.apache.org/log4j/1.2/faq.html#noconfig for more info.
Exception in thread "main" java.lang.ClassCastException: java.lang.String cannot be cast to main.java.example2.Person
    at main.java.example2.CassandraApp.main(CassandraApp.java:40)
    at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
    at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
    at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
    at java.lang.reflect.Method.invoke(Method.java:498)
    at com.intellij.rt.execution.application.AppMain.main(AppMain.java:147)

cassandraOps.queryForObject(s, Person.class) returns a String value, but I need to get back an instance of the class.


Answer:

It seems there is an issue with CqlTemplate's arbitrary object mapping.

Here is how I have fixed it.

   Person pnObj = cassandraOps.selectOne(s, Person.class);
   LOG.info(pnObj.getId());

Question:

I am using spring-data-cassandra and have a table such that its primary key is ((int_col1,int_col2),bigint_col1,bigint_col2). int_col1&int_col2 are the partition keys bigint_col1 & bigint_col2 are the cluster keys.

How important is it to implement hashcode & equals method for my class. What should be the hashcode implementation of my above @PrimaryKeyClass


Answer:

// your class's constructor should have exactly four arguments
// and ensure that each of these four fields are non-null

@Override
public int hashCode() {
  return 37
    ^ int_col1.hashCode()
    ^ int_col2.hashCode()
    ^ bigint_col1.hashCode()
    ^ bigint_col2.hashCode();
}

@Override
public boolean equals(Object that) {
  if (this == that) {
    return true;
  }
  if (that == null) {
    return false;
  }
  if (!(that instanceof YourPrimaryKeyClass)) {
    return false;
  }
  YourPrimaryKeyClass other = (YourPrimaryKeyClass) that;
  return this.int_col1.equals(other.int_col1)
    && this.int_col2.equals(other.int_col2)
    && bigint_col1.equals(other.bigint_col1)
    && bigint_col2.equals(other.bigint_col2);
}
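Equivalently, on Java 7+ the java.util.Objects helpers keep this boilerplate short and null-safe. A sketch with the same four fields (the class name MyPrimaryKey and the boxed field types are illustrative assumptions):

```java
import java.util.Objects;

// Illustrative primary-key class: two partition columns, two clustering columns.
public class MyPrimaryKey {

    private final Integer intCol1;
    private final Integer intCol2;
    private final Long bigintCol1;
    private final Long bigintCol2;

    public MyPrimaryKey(Integer intCol1, Integer intCol2, Long bigintCol1, Long bigintCol2) {
        this.intCol1 = intCol1;
        this.intCol2 = intCol2;
        this.bigintCol1 = bigintCol1;
        this.bigintCol2 = bigintCol2;
    }

    @Override
    public int hashCode() {
        // Combines all four fields; tolerates nulls
        return Objects.hash(intCol1, intCol2, bigintCol1, bigintCol2);
    }

    @Override
    public boolean equals(Object that) {
        if (this == that) {
            return true;
        }
        if (!(that instanceof MyPrimaryKey)) {
            return false; // also covers that == null
        }
        MyPrimaryKey other = (MyPrimaryKey) that;
        return Objects.equals(intCol1, other.intCol1)
            && Objects.equals(intCol2, other.intCol2)
            && Objects.equals(bigintCol1, other.bigintCol1)
            && Objects.equals(bigintCol2, other.bigintCol2);
    }
}
```

Whichever variant you pick, the important part is that equals and hashCode use exactly the full set of key columns, so that equal keys always hash to the same bucket.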

Question:

cluster = Cluster.builder().addContactPoints(host).build();  
Session session  = cluster.connect(keyspace);
CassandraOperations cassandraOps = new CassandraTemplate(session);   

What exceptions should I handle here, other than NullPointerException and NoHostAvailableException?


Answer:

Everything throws DataAccessException. That is one of the fundamental points of Spring Data: we translate the C* exceptions into the predictable, standard exceptions thrown by all Spring Data modules.

http://docs.spring.io/spring/docs/3.2.x/javadoc-api/org/springframework/dao/DataAccessException.html?is-external=true
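So in practice you catch the Spring hierarchy rather than the driver's exceptions. A minimal sketch (the entity and method names are illustrative assumptions):

```java
import org.springframework.dao.DataAccessException;
import org.springframework.dao.DataAccessResourceFailureException;
import org.springframework.data.cassandra.core.CassandraOperations;

// Illustrative: whatever the driver throws (read timeouts, no host available,
// invalid queries, ...) surfaces as a DataAccessException subclass.
public class SafeReader {

    private final CassandraOperations cassandraOps;

    public SafeReader(CassandraOperations cassandraOps) {
        this.cassandraOps = cassandraOps;
    }

    public long countPersons() {
        try {
            return cassandraOps.count(Person.class);
        } catch (DataAccessResourceFailureException e) {
            // e.g. no host was reachable; decide whether to retry or fail fast
            throw e;
        } catch (DataAccessException e) {
            // any other translated Cassandra error
            throw e;
        }
    }
}
```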

Question:

I have a problem with a Spring Boot + Cassandra web application. It started to appear as the data grew, and now it's a very common scenario.

Queries sometimes don't work and the CassandraRepository returns null. A few seconds later it works again, and a few seconds after that it fails again, so the web application alternates between 200 and 404 responses. The same query works in cqlsh every time.

I'm using:

  • spring-boot-starter-parent#2.1.3
  • spring-boot-starter-data-cassandra#2.1.3
  • Cassandra 3.11.3 (multiple clusters)

Data structure:

CREATE KEYSPACE data WITH replication = {'class': 'NetworkTopologyStrategy', 'dc1': '2'}  AND durable_writes = true;

CREATE TABLE data.image (
    hash text PRIMARY KEY,
    image blob
) WITH bloom_filter_fp_chance = 0.01
    AND caching = {'keys': 'ALL', 'rows_per_partition': 'NONE'}
    AND comment = ''
    AND compaction = {'class': 'org.apache.cassandra.db.compaction.SizeTieredCompactionStrategy', 'max_threshold': '32', 'min_threshold': '4'}
    AND compression = {'chunk_length_in_kb': '64', 'class': 'org.apache.cassandra.io.compress.LZ4Compressor'}
    AND crc_check_chance = 1.0
    AND dclocal_read_repair_chance = 0.1
    AND default_time_to_live = 0
    AND gc_grace_seconds = 864000
    AND max_index_interval = 2048
    AND memtable_flush_period_in_ms = 0
    AND min_index_interval = 128
    AND read_repair_chance = 0.0
    AND speculative_retry = '99PERCENTILE';

To configure Cassandra connection I use:

@Configuration
@EnableCassandraRepositories(basePackages = "...path...")
public class CassandraConfig extends AbstractCassandraConfiguration {

    @Bean
    public CassandraClusterFactoryBean cluster() {
        CassandraClusterFactoryBean cluster = new CassandraClusterFactoryBean();
        cluster.setContactPoints("127.0.0.1");
        cluster.setPort(91234);
        return cluster;
    }

To retrieve data I'm using a CassandraRepository with the @Query("select * from image where id = ?0") annotation. The retrieved data contains image blobs.

I think that a read timeout is the problem here; the servers have slow HDD disks and not very powerful CPUs. But how can I override these settings with the Spring Boot starter?

I've tried to use

SocketOptions so = new SocketOptions();
so.setConnectTimeoutMillis(10000);
so.setReadTimeoutMillis(20000);
cluster.setSocketOptions(so);

with no success.

What else can I make to have stable working solution?


Answer:

nodetool repair helped: after a few days everything started to work as expected. Conclusion: too many writes to the database.
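Independently of the repair, for anyone trying the read-timeout override from the question: the SocketOptions only take effect if they are applied to the factory bean that actually builds the Cluster. A sketch, under the assumption that the spring-data-cassandra version in use exposes setSocketOptions on CassandraClusterFactoryBean:

```java
import com.datastax.driver.core.SocketOptions;
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;
import org.springframework.data.cassandra.config.CassandraClusterFactoryBean;

@Configuration
public class ClusterConfig {

    @Bean
    public CassandraClusterFactoryBean cluster() {
        SocketOptions socketOptions = new SocketOptions();
        socketOptions.setConnectTimeoutMillis(10_000);
        socketOptions.setReadTimeoutMillis(20_000);   // raise for slow disks

        CassandraClusterFactoryBean cluster = new CassandraClusterFactoryBean();
        cluster.setContactPoints("127.0.0.1");
        cluster.setPort(9042);
        cluster.setSocketOptions(socketOptions);      // applied before the Cluster is built
        return cluster;
    }
}
```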

Question:

I have a problem with a simple Spring Boot application. I'm using:

  • spring-boot-starter-parent
  • spring-boot-starter-data-cassandra
  • Cassandra 3.11.3 (both on CentOS 7 server and local Mac OS) (query by cqlsh works)

I followed the simple guide at https://www.baeldung.com/spring-data-cassandra-tutorial and, no matter whether Cassandra is running or not, I get the following error on launch:

2018-11-25 09:12:34.581 ERROR 83213 --- [main] o.s.boot.SpringApplication: Application run failed
org.springframework.beans.factory.BeanCreationException: Error creating bean with name 'session' defined in class path resource [some/project/path/CassandraConfig.class]: Invocation of init method failed; nested exception is java.lang.NoClassDefFoundError: com/codahale/metrics/JmxReporte

My config class:

@Configuration
@EnableCassandraRepositories(basePackages = "packagename")
public class CassandraConfig extends AbstractCassandraConfiguration {

    @Override
    protected String getKeyspaceName() {
        return "test";
    }

    @Bean
    public CassandraClusterFactoryBean cluster() {
        CassandraClusterFactoryBean cluster = new CassandraClusterFactoryBean();
        cluster.setContactPoints("127.0.0.1");
        cluster.setPort(9042);
        return cluster;
    }

    @Override
    public SchemaAction getSchemaAction() {
        return SchemaAction.CREATE_IF_NOT_EXISTS;
    }

    @Override
    public String[] getEntityBasePackages() {
        return new String[]{"packagename"};
    }
}

Entity:

@Table
public class Image implements Serializable {

    @PrimaryKeyColumn(
        name = "key",
        ordinal = 0,
        type = PrimaryKeyType.PARTITIONED)
    private UUID id;

    @Column
    private Blob object;
}

Repository:

@Repository
public interface ImagesRepository extends CrudRepository<Image, UUID> {
}

Cassandra structure:

CREATE TABLE images (
    key text,
    object blob,
    PRIMARY KEY (key)
);

Also https://github.com/springframeworkguru/spring-boot-cassandra-example gives me the same error creating the bean.


Answer:

You are missing the Dropwizard Metrics dependency:

You need the metrics-core library as a dependency:

<dependencies>
    <dependency>
        <groupId>io.dropwizard.metrics</groupId>
        <artifactId>metrics-core</artifactId>
        <version>${metrics.version}</version>
    </dependency>
</dependencies>

Note

Make sure you have a metrics.version property declared in your POM with the current version, which is 3.1.0.

Question:

Spring Data Cassandra 1.5.0 comes with a streaming API in CassandraTemplate. I'm using spring-data-cassandra 1.5.1. I have a code like:

    String tableName = cassandraTemplate.getTableName(MyEntity.class).toCql();
    Select select = QueryBuilder.select()
            .all()
            .from(tableName);
    // In real world, WHERE statement is much more complex
    select.where(eq(ENTITY_FIELD_NAME, expectedField));
    List<MyEntity> result = cassandraTemplate.select(select, MyEntity.class);

and want to replace this code with iterable or Java 8 Stream in order to avoid fetching a big list of results to memory at once.

What I'm looking for is a method signature like CassandraOperations.stream(Select query, Class<T> entityClass), but it is not available.

The only available method in CassandraOperations accepts a query string: stream(String query, Class<T> entityClass). I tried to pass it the string generated from the Select, like

cassandraTemplate.stream(select.getQueryString(), MyEntity.class)

But that fails with InvalidQueryException: Invalid amount of bind variables, because getQueryString() returns the query with question-mark placeholders instead of the bound values.

I see 3 options to get what I want, but every option looks bad:

Is there any better way to stream selection results?

Thanks.


Answer:

So, as of now, the answer to my question is to wait until a stable version of spring-data-cassandra 2.0.0 comes out:

https://github.com/spring-projects/spring-data-cassandra/blob/2.0.x/spring-data-cassandra/src/main/java/org/springframework/data/cassandra/core/CassandraTemplate.java#L208
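For reference, once on 2.0 the template accepts a Statement directly, so the Select built above can be streamed without going through getQueryString(). A sketch against the 2.0 API (the entity, table, and column names are illustrative):

```java
import java.util.stream.Stream;

import com.datastax.driver.core.querybuilder.QueryBuilder;
import com.datastax.driver.core.querybuilder.Select;
import org.springframework.data.cassandra.core.CassandraTemplate;

public class StreamingSelect {

    private final CassandraTemplate cassandraTemplate;

    public StreamingSelect(CassandraTemplate cassandraTemplate) {
        this.cassandraTemplate = cassandraTemplate;
    }

    public void process(String expectedField) {
        Select select = QueryBuilder.select().all().from("my_entity");
        select.where(QueryBuilder.eq("entity_field_name", expectedField));

        // 2.0: stream(Statement, Class<T>) pages through results lazily,
        // avoiding one big in-memory List
        try (Stream<MyEntity> stream = cassandraTemplate.stream(select, MyEntity.class)) {
            stream.forEach(entity -> {
                // handle each row as it arrives
            });
        }
    }
}
```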

Question:

Currently, when I create databases/tables in Cassandra, I have to run scripts before fetching data. Now I want to create the same database for each tenant in a multi-tenant architecture. Do I need to create the database explicitly for each tenant, or is there a way to create it at runtime?

Thanks in advance...


Answer:

You'll have to do that explicitly.

Also, having a database per tenant is an expensive strategy in C* (if you have a lot of tenants), since it requires C* to allocate an additional memtable for each one.

I'd recommend adding a tenant id as part of your row key. There's a nice video by the BlackRock guys describing what they went through with that approach.
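A sketch of what "tenant id in the row key" can look like with Spring Data (all class, table, and column names are illustrative assumptions): the tenant id becomes the partition key, so each tenant's rows live in their own partitions within one shared table, and every query filters by tenant.

```java
import org.springframework.cassandra.core.PrimaryKeyType;
import org.springframework.data.cassandra.mapping.PrimaryKeyColumn;
import org.springframework.data.cassandra.mapping.Table;

// One shared table for all tenants; queries always filter by tenant_id.
@Table("customer_data")
public class CustomerData {

    // Partition key: isolates each tenant's data physically
    @PrimaryKeyColumn(name = "tenant_id", ordinal = 0, type = PrimaryKeyType.PARTITIONED)
    private String tenantId;

    // Clustering key: orders rows within a tenant's partition
    @PrimaryKeyColumn(name = "customer_id", ordinal = 1, type = PrimaryKeyType.CLUSTERED)
    private String customerId;

    private String payload;
}
```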

Question:

I am just getting started with Cassandra, but I am stuck because I cannot do a JOIN. Since JOINs are not possible in CQL, I looked for an alternative and tried to do the join on the Java application side.

Specifically, I used @OneToMany and tried joining the entities, but the following error appears.

Is there any good solution?

■Project structure

SpringBoot + Spring Data for Apache Cassandra

Version:

  • Spring Boot :: (v1.3.5.RELEASE)
  • spring-data-cassandra-1.3.5.RELEASE
  • cassandra 2.1.16

■Error log

com.datastax.driver.core.exceptions.InvalidQueryException: Unknown identifier emp
at com.datastax.driver.core.Responses$Error.asException(Responses.java:102) ~[cassandra-driver-core-2.1.9.jar:na]
at com.datastax.driver.core.DefaultResultSetFuture.onSet(DefaultResultSetFuture.java:149) ~[cassandra-driver-core-2.1.9.jar:na]
at com.datastax.driver.core.RequestHandler.setFinalResult(RequestHandler.java:183) ~[cassandra-driver-core-2.1.9.jar:na]
at com.datastax.driver.core.RequestHandler.access$2300(RequestHandler.java:44) ~[cassandra-driver-core-2.1.9.jar:na]
at com.datastax.driver.core.RequestHandler$SpeculativeExecution.setFinalResult(RequestHandler.java:751) ~[cassandra-driver-core-2.1.9.jar:na]

■ Source: Controller

@RequestMapping(value = "/DepartmentsCassandra/form", method = RequestMethod.POST)
@Transactional(readOnly=false)
public ModelAndView form(
        @RequestParam("department_id") int department_id,
        @RequestParam("department_name") String department_name,
        ModelAndView mav){
    Departments mydata = new Departments();
    mydata.setDepartment_id(department_id);
    mydata.setDepartment_name(department_name);
    repository.save(mydata);// ← Error occurred !!!
    return new ModelAndView("redirect:/DepartmentsCassandra");
}

■ Source: Entity: Departments

package com.example.cassandra.entity;

import java.util.ArrayList;
import java.util.List;
import javax.persistence.FetchType;
import javax.persistence.JoinColumn;
import javax.persistence.OneToMany;
import org.springframework.cassandra.core.PrimaryKeyType;
import org.springframework.data.cassandra.mapping.Column;
import org.springframework.data.cassandra.mapping.PrimaryKeyColumn;
import org.springframework.data.cassandra.mapping.Table;
@Table(value="departments")
public class Departments {

@PrimaryKeyColumn(name = "department_id",ordinal = 1,type = PrimaryKeyType.PARTITIONED)
private int department_id;

@Column(value = "department_name")
private String department_name;

public Departments(int department_id,String department_name){
    this.department_id = department_id;
    this.department_name = department_name;
}

@OneToMany(fetch=FetchType.EAGER)
@JoinColumn(name="department_id",insertable=false,updatable=false)
private List<Employees> emp = new ArrayList<Employees>();

Answer:

Sooo, a "client-side join" is a general anti-pattern when using Cassandra, since you run two queries instead of one every time and hence lose the performance gain. (The error itself appears because Spring Data Cassandra ignores JPA annotations such as @OneToMany and tries to map the emp field onto a table column that does not exist.) The way to go is to create a broad table for each query, including all "joined" data. So in your case, create a table employee_by_department or something alike. Check out the introductory courses on datastax.com - they are great :-)
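A sketch of that denormalization using the question's columns (the table name, clustering column, and employee fields are illustrative assumptions): one table keyed by department that already carries the employee data the join would have fetched, so listing a department's employees is a single partition read.

```java
import org.springframework.cassandra.core.PrimaryKeyType;
import org.springframework.data.cassandra.mapping.Column;
import org.springframework.data.cassandra.mapping.PrimaryKeyColumn;
import org.springframework.data.cassandra.mapping.Table;

// Denormalized "join": everything needed to list a department's employees
// is read with one partition query, no client-side join.
@Table("employee_by_department")
public class EmployeeByDepartment {

    @PrimaryKeyColumn(name = "department_id", ordinal = 0, type = PrimaryKeyType.PARTITIONED)
    private int departmentId;

    @PrimaryKeyColumn(name = "employee_id", ordinal = 1, type = PrimaryKeyType.CLUSTERED)
    private int employeeId;

    // "Joined" department data duplicated onto every employee row
    @Column("department_name")
    private String departmentName;

    @Column("employee_name")
    private String employeeName;
}
```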

Question:

I created two tables in Cassandra via CQL. I use org.springframework.data.cassandra.repository.CassandraRepository.

For one table (memobox), repository.findAll() succeeds; for the other table (departments), repository.findAll() fails and the following error is output.

Is there any good advice?

■Project structure

SpringBoot + Spring Data for Apache Cassandra

Version:

  • Spring Boot :: (v1.3.5.RELEASE)
  • spring-data-cassandra-1.3.5.RELEASE
  • cassandra 2.1.16

■Error log

java.lang.IllegalArgumentException: argument type mismatch
at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method) ~[na:1.8.0_74]
at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62) ~[na:1.8.0_74]
at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45) ~[na:1.8.0_74]
at java.lang.reflect.Constructor.newInstance(Constructor.java:423) ~[na:1.8.0_74]
at org.springframework.beans.BeanUtils.instantiateClass(BeanUtils.java:147) ~[spring-beans-4.2.6.RELEASE.jar:4.2.6.RELEASE]
at org.springframework.data.convert.ReflectionEntityInstantiator.createInstance(ReflectionEntityInstantiator.java:76) ~[spring-data-commons-1.11.4.RELEASE.jar:na]
at org.springframework.data.convert.ClassGeneratingEntityInstantiator.createInstance(ClassGeneratingEntityInstantiator.java:83) ~[spring-data-commons-1.11.4.RELEASE.jar:na]
at org.springframework.data.cassandra.convert.MappingCassandraConverter.readEntityFromRow(MappingCassandraConverter.java:133) ~[spring-data-cassandra-1.3.5.RELEASE.jar:na]
at org.springframework.data.cassandra.convert.MappingCassandraConverter.readRow(MappingCassandraConverter.java:115) ~[spring-data-cassandra-1.3.5.RELEASE.jar:na]
at org.springframework.data.cassandra.convert.MappingCassandraConverter.read(MappingCassandraConverter.java:200) ~[spring-data-cassandra-1.3.5.RELEASE.jar:na]
at org.springframework.data.cassandra.core.CassandraConverterRowCallback.doWith(CassandraConverterRowCallback.java:47) ~[spring-data-cassandra-1.3.5.RELEASE.jar:na]
at org.springframework.data.cassandra.core.CassandraTemplate.select(CassandraTemplate.java:565) ~[spring-data-cassandra-1.3.5.RELEASE.jar:na]
at org.springframework.data.cassandra.core.CassandraTemplate.select(CassandraTemplate.java:328) ~[spring-data-cassandra-1.3.5.RELEASE.jar:na]
at org.springframework.data.cassandra.core.CassandraTemplate.selectAll(CassandraTemplate.java:311) ~[spring-data-cassandra-1.3.5.RELEASE.jar:na]
at org.springframework.data.cassandra.repository.support.SimpleCassandraRepository.findAll(SimpleCassandraRepository.java:104) ~[spring-data-cassandra-1.3.5.RELEASE.jar:na]
at org.springframework.data.cassandra.repository.support.SimpleCassandraRepository.findAll(SimpleCassandraRepository.java:36) ~[spring-data-cassandra-1.3.5.RELEASE.jar:na]
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) ~[na:1.8.0_74]

■Successful table:describe memobox

cqlsh:keyspacea> describe memobox;
CREATE TABLE keyspacea.memobox (
id timeuuid PRIMARY KEY,
date timestamp,
memo text,
name text
) WITH bloom_filter_fp_chance = 0.01
AND caching = '{"keys":"ALL", "rows_per_partition":"NONE"}'
AND comment = ''
AND compaction = {'class': 'org.apache.cassandra.db.compaction.SizeTieredCompactionStrategy'}
AND compression = {'sstable_compression': 'org.apache.cassandra.io.compress.LZ4Compressor'}
AND dclocal_read_repair_chance = 0.1
AND default_time_to_live = 0
AND gc_grace_seconds = 864000
AND max_index_interval = 2048
AND memtable_flush_period_in_ms = 0
AND min_index_interval = 128
AND read_repair_chance = 0.0
AND speculative_retry = '99.0PERCENTILE';
CREATE INDEX memobox_memo ON keyspacea.memobox (memo);
CREATE INDEX memobox_name ON keyspacea.memobox (name);

■Unsuccessful table:describe departments

cqlsh:keyspacea> describe departments;
CREATE TABLE keyspacea.departments (
department_id varint PRIMARY KEY,
department_name text
) WITH bloom_filter_fp_chance = 0.01
AND caching = '{"keys":"ALL", "rows_per_partition":"NONE"}'
AND comment = ''
AND compaction = {'class': 'org.apache.cassandra.db.compaction.SizeTieredCompactionStrategy'}
AND compression = {'sstable_compression': 'org.apache.cassandra.io.compress.LZ4Compressor'}
AND dclocal_read_repair_chance = 0.1
AND default_time_to_live = 0
AND gc_grace_seconds = 864000
AND max_index_interval = 2048
AND memtable_flush_period_in_ms = 0
AND min_index_interval = 128
AND read_repair_chance = 0.0
AND speculative_retry = '99.0PERCENTILE';

■Source is below

https://github.com/hidetarou2013/SpringBootDBSample

branch is feature/cassandra

■Entity:MemoBox

package com.example.cassandra.entity;
import java.util.Date;
import java.util.UUID;
import org.springframework.cassandra.core.PrimaryKeyType;
import org.springframework.data.cassandra.mapping.Column;
import org.springframework.data.cassandra.mapping.PrimaryKeyColumn;
import org.springframework.data.cassandra.mapping.Table;
import com.datastax.driver.core.utils.UUIDs;

@Table(value = "memobox")
public class MemoBox {

@PrimaryKeyColumn(name = "id",ordinal = 1,type = PrimaryKeyType.PARTITIONED)
private UUID id = UUIDs.timeBased();

@Column(value = "name")
private String name;

@Column(value = "memo")
private String memo;

@Column(value = "date")
private Date date;

■Entity:Departments

package com.example.cassandra.entity;

import java.util.ArrayList;
import java.util.List;
import javax.persistence.FetchType;
import javax.persistence.JoinColumn;
import javax.persistence.OneToMany;
import org.springframework.cassandra.core.PrimaryKeyType;
import org.springframework.data.cassandra.mapping.Column;
import org.springframework.data.cassandra.mapping.PrimaryKeyColumn;
import org.springframework.data.cassandra.mapping.Table;
@Table(value="departments")
public class Departments {

@PrimaryKeyColumn(name = "department_id",ordinal = 1,type = PrimaryKeyType.PARTITIONED)
private int department_id;

@Column(value = "department_name")
private String department_name;

public Departments(int department_id,String department_name){
    this.department_id = department_id;
    this.department_name = department_name;
}

@OneToMany(fetch=FetchType.EAGER)
@JoinColumn(name="department_id",insertable=false,updatable=false)
private List<Employees> emp = new ArrayList<Employees>();

Answer:

Your Departments constructor is causing the problem; remove it. Spring Data instantiates the entity through that constructor, and the argument types do not match: the department_id column is varint, which the Java driver maps to BigInteger, not int.
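A sketch of what the entity might look like after the fix. This makes two assumptions beyond the answer itself: the varint column is held as a BigInteger field to match the driver's mapping, and the JPA @OneToMany field is dropped because Spring Data Cassandra does not understand it.

```java
import java.math.BigInteger;

import org.springframework.cassandra.core.PrimaryKeyType;
import org.springframework.data.cassandra.mapping.Column;
import org.springframework.data.cassandra.mapping.PrimaryKeyColumn;
import org.springframework.data.cassandra.mapping.Table;

@Table("departments")
public class Departments {

    // varint in CQL maps to java.math.BigInteger in the Java driver,
    // so the field type follows the column type here.
    @PrimaryKeyColumn(name = "department_id", ordinal = 0, type = PrimaryKeyType.PARTITIONED)
    private BigInteger departmentId;

    @Column("department_name")
    private String departmentName;

    // No-argument constructor so Spring Data's reflective
    // instantiation does not have to match argument types.
    public Departments() {
    }
}
```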

Question:

The Spring Data Cassandra project has the org.springframework.data.cassandra.repository.support.SimpleCassandraRepository class for backing Cassandra repositories.

What I want:

  1. Create a "general" interface, for example AsyncCassandraRepository, like org.springframework.data.cassandra.repository.TypedIdCassandraRepository but with asynchronous methods.
  2. Create an implementation of that interface, like org.springframework.data.cassandra.repository.support.SimpleCassandraRepository but with asynchronous methods.
  3. Then create new asynchronous repositories for other domain entities simply by extending the async interface, e.g. CustomerRepository extends AsyncCassandraRepository, so no new implementation is needed.

So the idea is to create a new async interface and implementation and use it everywhere. SimpleCassandraRepository itself is really simple, so there is no problem creating a new async version.

The real problem is how to "register" the new async interface and implementation in the depths of Spring Data Cassandra. How can I do that?


Answer:

There is a JIRA task for this in the Spring Data Cassandra project:

Question:

I have a Java app on Spring Boot with a Cassandra DB, to which I'm writing Person entities. Each person row in the DB must be deleted once it is 5 minutes old, so the concept is simple:

A person is added to the DB with a timestamp, and that person must be removed after exactly 5 minutes.

The only idea that comes to mind is a Spring Scheduler that runs every second, checks every row for expiry, and deletes the rows that have expired.


Answer:

Since you are using Cassandra as the DB, you can leverage Cassandra's TTL feature.

During insertion you specify a TTL ('time to live') value in seconds. After that amount of time, the data is removed automatically.

The TTL syntax in CQL looks like this:

INSERT INTO person (name, age) VALUES ('ExampleName', 39) USING TTL 300;
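The same TTL can be supplied from the Spring side through write options when saving. A sketch, assuming a spring-data-cassandra 2.x CassandraOperations and the question's Person entity:

```java
import org.springframework.data.cassandra.core.CassandraOperations;
import org.springframework.data.cassandra.core.InsertOptions;

public class PersonWriter {

    private final CassandraOperations cassandraOps;

    public PersonWriter(CassandraOperations cassandraOps) {
        this.cassandraOps = cassandraOps;
    }

    public void saveWithTtl(Person person) {
        // The row disappears automatically 300 seconds after the insert,
        // with no scheduler or manual delete needed.
        InsertOptions fiveMinutes = InsertOptions.builder().ttl(300).build();
        cassandraOps.insert(person, fiveMinutes);
    }
}
```

This keeps the expiry logic inside Cassandra itself, which is both simpler and cheaper than polling every second from the application.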