As I studied an open-source codebase with intent to contribute, I used ChatGPT as a note-taking tool. This is my side of one of our conversations.


AuthResolver.sendMail is injected by Inversify – why?

AuthResolver.ts:

// ====================================
// POST /signup { $gql.mutation('signup') }
// ====================================

@Mutation((returns) => types.SignupOutput)
async signup(@Arg('input') input: types.SignupInput, @Ctx() ctx: ResolverCtx): Promise<typeof types.SignupOutput> {
  const sessionUser = ctx.getSessionUser();
  if (sessionUser.isLoggedIn) {
    return types.ForbiddenError.of({ message: 'must be logged out' });
  }

  try {
    const user = await this.authService.signup(input.username, input.email, input.password);
    const mail = this.mailWriterService.writeConfirmationEmail(user);

    await this.sendMail.job.enqueue({ mail });
    this.persistLogin(ctx, user);
    return types.User.of(user);
  } catch (e) {
    this.logger.error(`could not signup: error=${e}`);
    if (e instanceof errors.ValidationError) {
      return types.ValidationError.of(e);
    } else {
      return types.UnknownError.of(e);
    }
  }
}

inversify.config.ts:

container
  .bind<SendMail>(TYPES.SendMail)
  .to(SendMail)
  .inSingletonScope();

So far, DI seems like arbitrarily deciding that an object’s properties should be supplied from the outside rather than assigned directly inside it. Does it have anything in common with Redux on the frontend? It’s only used in this app’s backend.
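
To pin the idea down for my notes, here’s a minimal sketch of the contrast (the token and resolver names are mine, not the app’s):

import 'reflect-metadata';
import { Container, inject, injectable } from 'inversify';

const TYPES = { SendMail: Symbol('SendMail') }; // hypothetical token

@injectable()
class SendMail {} // stand-in for the real job class

// Without DI: the resolver constructs its own dependency, hard-wiring the concrete class.
class ManualAuthResolver {
  sendMail = new SendMail();
}

// With DI: the resolver only declares what it needs; the container decides which
// concrete class (and which instance) satisfies that need.
@injectable()
class InjectedAuthResolver {
  constructor(@inject(TYPES.SendMail) readonly sendMail: SendMail) {}
}

const container = new Container();
container.bind<SendMail>(TYPES.SendMail).to(SendMail).inSingletonScope();
container.bind(InjectedAuthResolver).toSelf();

const resolver = container.get(InjectedAuthResolver); // sendMail arrives pre-built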


Below is a case of Inversify being used for something other than dependency injection, unless I misunderstand. It seems to be a traditional import, except that inversify.config.ts determines which module is used conditionally, based on NODE_ENV.

import { Db } from '../db';
import { container } from '../inversify.config';
import { TYPES } from '../inversify.constants';
import { generateSchema } from '../resolvers';
import { GraphqlServer } from '../server';

(async () => {
  const db = container.get<Db>(TYPES.Db);
  await db.init();
  const schema = generateSchema();

  const server = container.get<GraphqlServer>(TYPES.ApiServer);

  server.start(schema);
})();

Can you walk me through inversify.config.ts? It imports every relevant backend class, all of which are decorated with @injectable in their home files. At first it looks like classes are pointlessly “bound” to themselves, but the presence of corresponding tokens (TYPES?) suggests a form of mounting that classes need before they can use Inversify’s features, although the tokens aren’t always used. Ultimately, aside from the conditionals that pick a service implementation based on environment (TYPES.Mailer and TYPES.ApiServer), the file looks like a giant barrel file with extra boilerplate.
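
To check my reading, here’s the pattern I think the file reduces to (the class names are stand-ins, not the actual imports):

import 'reflect-metadata';
import { Container, injectable } from 'inversify';

@injectable()
class MikroORMDb {} // stand-in for the real imported class

@injectable()
class AuthService {} // likewise

// inversify.constants.ts boils down to a bag of Symbols: a token is just a
// unique key the container uses to look up a binding.
const TYPES = { Db: Symbol('Db') };

// inversify.config.ts then maps each token (or class) to a concrete class.
const container = new Container();
container.bind<MikroORMDb>(TYPES.Db).to(MikroORMDb).inSingletonScope();
container.bind(AuthService).toSelf(); // the “bound to itself” case: the class is its own token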


Let me add this confusing bit, where the same Db is bound in both branches of the conditional.

if (config.NODE_ENV === 'production') {
  container
    .bind<Db>(TYPES.Db)
    .to(MikroORMDb)
    .inSingletonScope();
} else {
  container
    .bind<Db>(TYPES.Db)
    .to(MikroORMDb)
    .inSingletonScope();
}
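
Presumably the branches were meant to diverge eventually; a non-degenerate version would bind a different implementation per environment, something like this (InMemoryDb is hypothetical):

if (config.NODE_ENV === 'production') {
  container.bind<Db>(TYPES.Db).to(MikroORMDb).inSingletonScope();
} else {
  container.bind<Db>(TYPES.Db).to(InMemoryDb).inSingletonScope(); // hypothetical dev/test implementation
}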

What does “dependency” mean in this context? A single necessary module? Are we basically using this tool to abstract constructor syntax and persist individual class instances across the app?


Earlier I asked if this had any similarity to Redux. It seems, at least in the inSingletonScope() use case, it is a form of backend state management, allowing access to global instances across the app.
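
A quick check of that intuition, assuming bindings like the ones above: with inSingletonScope(), every get() for the same token returns the same object, which is what makes the container feel like a global store:

const a = container.get<SendMail>(TYPES.SendMail);
const b = container.get<SendMail>(TYPES.SendMail);
console.log(a === b); // true: one instance per container, shared by everything that injects it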


So in this example, which seems to be the typical implementation, UserRepo is injected into AuthService – since auth services perform database operations on the users table, UserRepo is an obvious dependency – to avoid constructor syntax and ensure that only one UserRepo instance is shared between the services that depend on it?
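
For reference, the shape I mean, sketched (the real AuthService has more dependencies, and the UserRepo interface here is a minimal stand-in):

import { inject, injectable } from 'inversify';
import { TYPES } from '../inversify.constants'; // assumes a UserRepo token exists there

interface User { id: string; username: string; }
interface UserRepo { find(id: string): Promise<User | null>; }

@injectable()
export class AuthService {
  // the container hands every consumer of TYPES.UserRepo the same singleton instance
  constructor(@inject(TYPES.UserRepo) private userRepo: UserRepo) {}

  async findUser(id: string): Promise<User | null> {
    return this.userRepo.find(id);
  }
}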


Explain this: “We don’t want every service to manually do new UserRepo() — that couples the service to a specific implementation and breaks testability.”
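
My current reading of that claim, sketched (the interfaces are minimal stand-ins):

interface User { id: string; username: string; }
interface UserRepo { find(id: string): Promise<User | null>; }

// A fake satisfying the same interface; no database involved.
class FakeUserRepo implements UserRepo {
  constructor(private users: User[] = []) {}
  async find(id: string) { return this.users.find((u) => u.id === id) ?? null; }
}

class AuthService {
  // receives the repo instead of doing `new UserRepo()` internally
  constructor(private userRepo: UserRepo) {}
  async getUser(id: string) { return this.userRepo.find(id); }
}

// In a test: no container, no database, just hand in the fake.
const service = new AuthService(new FakeUserRepo([{ id: '1', username: 'jane' }]));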

Also, in an architecture where all UserRepo-dependent route handlers are written in one or two files, implying a maximum of 2 UserRepo instances, there’s no reason for this approach. DI seems like a helper for especially modular architectures. Since this is a smaller app with 4 or 5 services maximum, where the level of modularity is arguably performative, I assume the author treated this codebase as a woodshed for technologies of interest, whether scale-appropriate or not.


I’m realizing this practice is native to class-based architecture. I’ve written full stack web apps without exporting a single class, with occasional exceptions. Is there a function-based analog to DI, or is this sort of thing born out of the unique disadvantages of working with classes in JS?
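
The function-based analog I’m imagining would be a factory closure that takes its dependencies as arguments, i.e. DI without classes or a container:

interface Mailer { send(to: string, body: string): Promise<void>; }

// The “service” is a factory; dependencies arrive as arguments and are
// captured by the returned closure.
const makeSignup = (mailer: Mailer) => async (email: string): Promise<void> => {
  // ...create the user record here...
  await mailer.send(email, 'please confirm your email');
};

// Composition root: wiring happens once, at the edge of the app.
const consoleMailer: Mailer = { send: async (to, body) => console.log(to, body) };
const signup = makeSignup(consoleMailer);
// in tests: makeSignup(fakeMailer)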


I’m trying to understand the flow of this app’s jobs layer. At least in dev, two separate services in the docker compose network besides the main API server (“worker” and “dispatcher”) run express servers that container.get() the BullMq job singletons declared in the Inversify container, then call startWorking() and startDispatching(), respectively, on each of those jobs. Questions:

  1. What is the architectural advantage of distributing the job queue across three hosts (api, worker, dispatcher), and how do they all communicate?
  2. Inversify exposes the jobs as singletons, but in three different docker services – so how do all services happen to be passing around the same instances?

the entrypoint for the “dispatcher” server:

import { Db } from '../db';
import { container } from '../inversify.config';
import { TYPES } from '../inversify.constants';
import { getAllJobs } from '../jobs';
import { JobServer } from '../server';
import { Logger } from '../util';

(async () => {
  const db = container.get<Db>(TYPES.Db);
  await db.init();

  const logger = container.get<Logger>(TYPES.Logger);
  const jobs = getAllJobs(container);
  await Promise.all(jobs.map((job) => job.startDispatching()));

  logger.info('job dispatcher started');

  const server = container.get<JobServer>(TYPES.WorkerServer);
  server.start(jobs);
})();

the entrypoint for the "worker" server:

import { Db } from '../db';
import { container } from '../inversify.config';
import { TYPES } from '../inversify.constants';
import { getAllJobs } from '../jobs';
import { JobServer } from '../server';
import { Logger } from '../util';

(async () => {
  const db = container.get<Db>(TYPES.Db);
  await db.init();

  const logger = container.get<Logger>(TYPES.Logger);
  const jobs = getAllJobs(container);
  await Promise.all(jobs.map((job) => job.startWorking()));

  logger.info('job worker started');

  const server = container.get<JobServer>(TYPES.WorkerServer);
  server.start(jobs);
})();

example API resolver (AuthResolver.ts) that enqueues a SendMail job… which is somehow passed to the worker and dispatcher server, or…?

@Mutation((returns) => types.SignupOutput)
async signup(@Arg('input') input: types.SignupInput, @Ctx() ctx: ResolverCtx): Promise<typeof types.SignupOutput> {
  const sessionUser = ctx.getSessionUser();
  if (sessionUser.isLoggedIn) {
    return types.ForbiddenError.of({ message: 'must be logged out' });
  }

  try {
    const user = await this.authService.signup(input.username, input.email, input.password);
    const mail = this.mailWriterService.writeConfirmationEmail(user);

    await this.sendMail.job.enqueue({ mail });
    this.persistLogin(ctx, user);
    return types.User.of(user);
  } catch (e) {
    this.logger.error(`could not signup: error=${e}`);
    if (e instanceof errors.ValidationError) {
      return types.ValidationError.of(e);
    } else {
      return types.UnknownError.of(e);
    }
  }
}

the SendMail job, which ultimately passes the mail data returned by MailWriterService to a Nodemailer SES transport and sends:

import { inject, injectable } from 'inversify';
import { Config } from '../../config';
import { TYPES } from '../../inversify.constants';
import { Job } from '../types';
import { sendMail, SendMailPayload } from './../processors';
import { BullMqJob } from './BullMqJob';

@injectable()
export class SendMail {
  readonly job: Job<SendMailPayload>;
  private readonly config: Config;

  constructor(@inject(TYPES.Config) config: Config) {
    this.config = config;
    this.job = new BullMqJob('SEND_MAIL', sendMail, this.config, { attempts: 3 });
  }
}
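
And my guess at the shape of the processor it references (a sketch: the real one builds a Nodemailer SES transport, and the payload type here is a stand-in):

import nodemailer from 'nodemailer';

type Mail = { from: string; to: string; subject: string; html: string };
export type SendMailPayload = { mail: Mail };

// The real app would configure the SES transport here; any Nodemailer
// transport exposes the same sendMail() method.
const transporter = nodemailer.createTransport({ host: 'localhost', port: 1025 });

export const sendMail = async (payload: SendMailPayload): Promise<void> => {
  await transporter.sendMail(payload.mail);
};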

Then in which docker container does the BullMQ queue run? And does it relate to the redis container also featured in the docker compose network? I assume it does, since redis is otherwise only used as a session store in fakeprod et al. (the dev db is seeded with user data).


In the case of a job like SendMail where no intervalMs option is set, is dispatching (scheduling) more of a formality?
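
My mental model of what startDispatching() might reduce to, sketched with bare BullMQ (intervalMs is the app’s option; repeat.every is BullMQ’s):

import { Queue } from 'bullmq';

(async () => {
  const queue = new Queue('SEND_MAIL', { connection: { host: 'redis', port: 6379 } });

  // With an interval, dispatching registers a repeatable job that Redis
  // re-enqueues on a schedule; without one, there is nothing to register and
  // the queue just waits for explicit enqueue() calls from the API.
  let intervalMs: number | undefined = undefined; // SendMail sets no interval
  if (intervalMs !== undefined) {
    await queue.add('SEND_MAIL', {}, { repeat: { every: intervalMs } });
  }
})();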


So since there’s no explicit redis setup, when an API resolver calls job.enqueue(), does BullMQ just look for a redis process on port 6379 of the local network?


So when the worker and dispatcher servers run startWorking() and startDispatching() on all three jobs, they’re obviously not running any jobs, since none have been enqueued yet; they’re just preparing the Redis queue for ‘SEND_MAIL’, etc.?


So while api, dispatcher, and worker all touch different BullMqJob instances, they all just tell the redis queue what code to run and how often, indexed by name string.
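
Sketching the mechanics with bare BullMQ to convince myself: the producer and the consumer live in different processes and share nothing but a queue name and a Redis connection:

import { Queue, Worker } from 'bullmq';

const connection = { host: 'redis', port: 6379 }; // presumably the compose service name

(async () => {
  // In the API process: add() writes a job record into Redis.
  const queue = new Queue('SEND_MAIL', { connection });
  await queue.add('SEND_MAIL', { mail: { to: 'user@example.com' } });

  // In the worker process: a Worker bound to the same name polls Redis and
  // runs its processor locally. The processes never exchange object instances.
  const worker = new Worker(
    'SEND_MAIL',
    async (job) => {
      console.log('sending', job.data.mail);
    },
    { connection }
  );
})();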


Wild – this is my first real exposure to app functionality distributed across multiple processes or servers.

As for the graphql api architecture – base-level db entity ops (repos), services which encapsulate repos and do other stuff like encryption, graphql resolvers which encapsulate services and do other stuff like communicating with the job queue – is this typical?
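
The layering as I understand it, in sketch form (method bodies elided; names illustrative):

interface User { id: string; }

// Repo: entity-level persistence and nothing else.
class UserRepo {
  async find(id: string): Promise<User | null> {
    return null; // would delegate to the ORM
  }
}

// Service: composes repos with domain logic (hashing, token checks, ...).
class AuthService {
  constructor(private userRepo: UserRepo) {}
  async getUser(id: string): Promise<User | null> {
    return this.userRepo.find(id);
  }
}

// Resolver: composes services with transport concerns (sessions, job queue, GraphQL types).
class AuthResolver {
  constructor(private authService: AuthService) {}
}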


In this app, user records have “confirmedAt” and “confirmationToken” fields, which are respectively populated and nulled upon email confirmation. But I can’t find any server-side logic that checks for confirmation status during login.

We have AuthResolver.ts:confirmEmail():

async confirmEmail(id: string, confirmationToken: string, confirmedAt: Date): Promise<void> {
  const user = await this.userRepo.find(id);
  if (!user) {
    throw new NotFoundError('user not found');
  }
  if (user.confirmedAt) {
    throw new BadRequestError('invalid confirmation token');
  }
  if (!user.confirmationToken) {
    throw new BadRequestError('invalid confirmation token');
  }
  if (!confirmationToken) {
    throw new BadRequestError('invalid confirmation token');
  }
  const confirmationTokensMatch = await this.tokensMatch(user.confirmationToken, confirmationToken);
  if (!confirmationTokensMatch) {
    throw new BadRequestError('invalid confirmation token');
  }

  // user.confirmedAt is populated and user.confirmationToken is nulled; the user is redirected to /library
  // on login, when is confirmedAt tested?
  await this.userRepo.update(user.id, { confirmationToken: null, confirmedAt });
}

then AuthResolver.ts:login():

@Mutation((returns) => types.LoginOutput)
async login(@Arg('input') input: types.LoginInput, @Ctx() ctx: ResolverCtx): Promise<typeof types.LoginOutput> {
  const sessionUser = ctx.getSessionUser();
  if (sessionUser.isLoggedIn) {
    return types.ForbiddenError.of({ message: 'must be logged out' });
  }

  const user = await this.authService.getAuthenticatedUser(input.usernameOrEmail, input.password);
  if (!user) {
    return types.ForbiddenError.of({ message: 'wrong username, email, or password' });
  }

  this.persistLogin(ctx, user);
  return types.User.of(user);
}

which calls AuthService.ts:getAuthenticatedUser():

async getAuthenticatedUser(usernameOrEmail: string, password: string): Promise<User | null> {
  const user = await this.userRepo.findByUsernameOrEmail(usernameOrEmail);
  if (!user) {
    return null;
  }
  const isPassword = await bcrypt.compare(password, user.encryptedPassword);
  return isPassword ? user : null;
}

which finally calls UserRepo:findByUsernameOrEmail():

async findByUsernameOrEmail(usernameOrEmail: string): Promise<User | null> {
  const username = usernameOrEmail;
  const email = usernameOrEmail;
  const user = await this.em.findOne(UserEntity, { $or: [{ username }, { email }] });
  return user ? pojo(user) : null;
}

Could this check be happening on the frontend?


There are no matches for “confirmedAt” in the frontend codebase outside of the email confirmation flow itself – the only exception being its inclusion as a requested field in auth graphql queries:

export const login = $gql
  .mutation('login')
  .setQuery({
    ...t.union<LoginOutput>()({
      User: {
        __typename: t.constant('User'),
        id: t.string,
        email: t.string,
        username: t.string,
        role: t.optional.oneOf(UserRole)!,
        confirmedAt: t.string,
      },
      ForbiddenError: {
        __typename: t.constant('ForbiddenError'),
        message: t.string,
      },
    }),
  })
  .setVariables<{ input: LoginInput }>({
    input: {
      usernameOrEmail: t.string,
      password: t.string,
    },
  })
  .build();

I’ll probably add the check myself, but the author and I are not in contact, so I’m wondering if it would be a diabolical PR to make.
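
For the record, the kind of change I have in mind (a sketch against getAuthenticatedUser(); whether login should be blocked outright or merely flagged is the author’s call):

async getAuthenticatedUser(usernameOrEmail: string, password: string): Promise<User | null> {
  const user = await this.userRepo.findByUsernameOrEmail(usernameOrEmail);
  if (!user) {
    return null;
  }
  // hypothetical addition: refuse unconfirmed accounts at the service boundary
  if (!user.confirmedAt) {
    return null;
  }
  const isPassword = await bcrypt.compare(password, user.encryptedPassword);
  return isPassword ? user : null;
}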


Is there a difference between microservices and the “multi-service” architecture seen here?