Diffusion models are generative models that synthesize images, videos, and audio by learning from training samples, and they have attracted considerable attention in recent years. In this paper, we investigate whether diffusion models are resistant to membership inference attacks, which are used to evaluate the privacy leakage of a machine learning model. We primarily discuss the diffusion model from two standpoints: a comparison with a generative adversarial network (GAN) as a conventional model, and hyperparameters unique to the diffusion model, such as timesteps, sampling steps, and sampling variances. We conduct extensive experiments with the denoising diffusion implicit model (DDIM) as a diffusion model and the deep convolutional GAN (DCGAN) as a GAN on the CelebA and CIFAR-10 datasets in both white-box and black-box settings, and show that the diffusion model is comparable to the GAN in terms of resistance to membership inference attacks. Next, we demonstrate that the impact of timesteps is significant and that the intermediate steps in the noise schedule are the most vulnerable to the attack. Further analysis yields two key insights. First, DDIM is more vulnerable to the attack when trained with fewer samples, even though it achieves lower Fréchet inception distance (FID) scores than DCGAN. Second, the number of sampling steps is an important hyperparameter for resistance to the attack, whereas the impact of sampling variances is negligible.
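To make the notion of per-timestep vulnerability concrete, the following is a minimal sketch of a loss-threshold membership inference attack probed at a single timestep t: samples whose noise-prediction loss falls below a calibrated threshold are guessed to be training members. The `model(x_t, t)` interface, the DDPM-style linear beta schedule, and the threshold calibration are illustrative assumptions, not the exact attack used in the paper.

```python
import torch

@torch.no_grad()
def diffusion_loss_at_t(model, x0, t, alphas_cumprod):
    """Per-sample noise-prediction MSE at timestep t.

    Lower loss on a sample suggests the model has seen it during training.
    `alphas_cumprod` is the cumulative product of (1 - beta) over the schedule.
    """
    noise = torch.randn_like(x0)
    a_bar = alphas_cumprod[t]
    # Forward diffusion: x_t = sqrt(a_bar) * x0 + sqrt(1 - a_bar) * noise
    x_t = a_bar.sqrt() * x0 + (1.0 - a_bar).sqrt() * noise
    t_batch = torch.full((x0.shape[0],), t, device=x0.device, dtype=torch.long)
    pred = model(x_t, t_batch)  # assumed interface: model predicts the added noise
    return ((pred - noise) ** 2).flatten(1).mean(dim=1)

def infer_membership(model, x0, t, alphas_cumprod, threshold):
    """Predict 'member' when the loss at timestep t is below the threshold.

    The threshold would be calibrated on data known to be non-members.
    """
    return diffusion_loss_at_t(model, x0, t, alphas_cumprod) < threshold

# Example schedule (linear betas, T = 1000 timesteps, as in DDPM):
betas = torch.linspace(1e-4, 0.02, 1000)
alphas_cumprod = torch.cumprod(1.0 - betas, dim=0)
```

Sweeping t over the schedule with this kind of probe is one way to see why the intermediate timesteps, where the loss gap between members and non-members is largest, leak the most membership information.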