
Events added to in-memory queue are not being raised after ContinueAsNew is called #331

Open
@tabasco7879

Description

🐛 Describe the bug
I hit the exact same problem as described for the C# library (Azure/azure-functions-durable-extension#676): only the first event is processed when events are raised in quick succession.

🤔 Expected behavior
I was expecting continue_as_new to restart the orchestrator_function so that it picks up the next event in the queue.

Steps to reproduce

What Durable Functions patterns are you using, if any?

I was testing the singleton orchestrator pattern. Instead of directly invoking the orchestrator, I use raise_event to communicate with it. The orchestrator listens for the incoming event with wait_for_external_event and triggers the processing.

Any minimal reproducer we can use?

The orchestrator function

import json
import logging

import azure.durable_functions as df


def orchestrator_function(context: df.DurableOrchestrationContext):
    logging.warning("orchestrator starts")
    result = yield context.wait_for_external_event('wait_for_request')
    req = json.loads(result)
    yield context.call_activity('Hello1', json.dumps(req))
    yield context.call_activity('Hello1', json.dumps(req))
    yield context.call_activity('Hello1', json.dumps(req))
    logging.warning("orchestrator ends: %s", req)
    # Restart the orchestration so it can wait for the next event.
    context.continue_as_new(None)

# Standard registration for the v1 Python programming model.
main = df.Orchestrator.create(orchestrator_function)

The Hello1 activity function uses time.sleep(5) to simulate extended processing (see the sketch below).
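The activity code wasn't included above; here is a minimal sketch of what it presumably looks like, assuming a plain string input (the orchestrator passes json.dumps(req)) and a string return value:

import time
import logging


def main(name: str) -> str:
    # Simulate roughly 5 seconds of downstream work per call.
    logging.warning("Hello1 received: %s", name)
    time.sleep(5)
    return f"Hello {name}!"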

The HttpStarter

import hashlib
import logging

import azure.functions as func
import azure.durable_functions as df


async def main(req: func.HttpRequest, starter: str) -> func.HttpResponse:
    client = df.DurableOrchestrationClient(starter)
    req_body = req.get_json()
    function_name = "orchestrator_function"

    # Derive a deterministic instance ID from the target endpoint so that
    # every request for the same endpoint reaches the same singleton instance.
    h = hashlib.sha256()
    h.update(str.encode(req_body['endpoint']))
    instance_id = h.hexdigest()
    existing_instance = await client.get_status(instance_id)

    logging.warning(req_body)
    logging.warning(f"Start orchestration with ID = '{instance_id}'.")

    # Start a new instance only if no live one exists under this ID.
    if existing_instance.runtime_status in [
            df.OrchestrationRuntimeStatus.Completed,
            df.OrchestrationRuntimeStatus.Failed,
            df.OrchestrationRuntimeStatus.Terminated,
            None]:
        instance_id = await client.start_new(function_name, instance_id, None)
        logging.warning(f"Started orchestration with ID = '{instance_id}'.")

    await client.raise_event(instance_id, 'wait_for_request', req_body)
    return client.create_check_status_response(req, instance_id)

When sending two HTTP requests to the HttpStarter consecutively (not simultaneously, but with a slight gap), the orchestrator gets the first message and finishes processing it, but never picks up the second message.
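For reference, two back-to-back requests along these lines reproduce the behavior; the local /api/http_trigger route, the endpoint value, and the use of the requests library are assumptions on my part:

import time
import requests

# Hypothetical local route for the http_trigger starter function.
URL = "http://localhost:7071/api/http_trigger"
payload = {"endpoint": "endpoint-a"}  # hypothetical endpoint value

requests.post(URL, json=payload)  # first event: processed normally
time.sleep(3)  # slight gap, well under the ~15s the orchestrator runs
requests.post(URL, json=payload)  # second event: never picked up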

Are you running this locally or on Azure?

I have tested both locally and on Azure. I am currently working on a prototype that uses a singleton durable function to control traffic to endpoints; some endpoints may allow concurrent traffic and some may not.

If deployed to Azure

We have access to a lot of telemetry that can help with investigations. Please provide as much of the following information as you can to help us investigate!

  • Timeframe issue observed: 2021-11-02
  • Function App name: tao-func-test1
  • Function name(s): adapter_controller (orchestrator), http_trigger (HttpStarter)
  • Azure region: US east
  • Orchestration instance ID(s): 5e0c90d2d434f24b1ec3df57316653d00de3cc27b2001902cd0983c18e381b78
  • Azure storage account name: taofunctest1

If you don't want to share your Function App or storage account name on GitHub, please at least share the orchestration instance ID. Otherwise it's extremely difficult to look up information.


Labels: Enhancement (Feature requests) · P2 (Priority 2) · bug (Something isn't working)
