controller / CONTROLLER-1449

Reads of medium to large data sets fail on nodes with replica shards


Details

    • Type: Bug
    • Status: Resolved
    • Resolution: Done
    • Beryllium
    • None
    • Component: clustering
    • None
    • Operating System: All
      Platform: All
    • 4627
    • Priority: Highest

    Description

  When running dsbenchmark on a node with a replica shard, the READ operation on a medium-sized list fails.

  When I issue many reads (one per list item) on a node remote from the shard leader (i.e., in my 3-node test cluster I issued the reads on 10.194.126.98 or 10.194.126.99, which are replica nodes), the operation never finishes. The dsbenchmark READ test dumps a 10,000-element list into the data store and then tries to read the elements back one by one. A list of 1,000 items works fine; a list of 10,000 items does not. Even with the 10,000-item list, I can see the list items through RESTCONF on the leader node (by issuing a REST read request on 10.194.126.97), but the programmatic read from a remote node does not work.
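  The failing access pattern can be sketched as follows. This is a hypothetical, self-contained illustration of the one-by-one read loop described above; the function names and the in-memory dict standing in for the clustered data store are assumptions, not the actual dsbenchmark or MD-SAL code:

```python
# Hypothetical sketch of the dsbenchmark READ pattern: write a large
# list once, then read the elements back individually. A plain dict
# stands in for the clustered data store (an assumption for illustration).

def populate(store, total):
    """Write `total` items into the store, keyed by index."""
    for i in range(total):
        store[i] = {"id": i, "value": "item-%d" % i}

def read_one_by_one(store, total):
    """Read each element individually, as the READ test does.
    Against a replica node, this per-element loop is where the
    reported hang occurs."""
    items = []
    for i in range(total):
        item = store.get(i)
        if item is None:
            raise RuntimeError("read of element %d failed" % i)
        items.append(item)
    return items

store = {}
populate(store, 10000)
result = read_one_by_one(store, 10000)
```

  With a local dict every read succeeds; the bug report is that the equivalent remote reads against a replica shard never complete once the list grows to 10,000 elements.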

  To reproduce, install dsbenchmark and run the attached script, first on the leader node and then on the replica nodes, with the following command line:

      ./dsbenchmark.py --host 10.194.126.98 --txtype SIMPLE-TX --inner 1 --optype READ --warmup 1 --runs 3 --total 10000 --ops 10 100
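  To cover all three nodes, the same command line can be assembled per host. The sketch below only builds the argument lists (the host addresses and flags mirror the ticket; `HOSTS`, `build_command`, and the leader/replica roles are taken from the description, and actually executing the commands is left to the operator):

```python
# Build the dsbenchmark command line for each cluster node.
# Per the ticket, 10.194.126.97 is the leader; .98 and .99 are replicas.

HOSTS = ["10.194.126.97", "10.194.126.98", "10.194.126.99"]

def build_command(host):
    """Return the argv list matching the command line from the ticket."""
    return ["./dsbenchmark.py",
            "--host", host,
            "--txtype", "SIMPLE-TX",
            "--inner", "1",
            "--optype", "READ",
            "--warmup", "1",
            "--runs", "3",
            "--total", "10000",
            "--ops", "10", "100"]

commands = [build_command(h) for h in HOSTS]
```

  Running the leader's command first establishes the baseline; the replica runs are the ones expected to hang.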

      Attachments



          People

            Assignee: tpantelis Tom Pantelis
            Reporter: jmedved@cisco.com Jan Medved
