
This is a homework problem, but I'm not looking for an easy way out. I've been thinking about it for a while and even went to my professor's office hours, and I still don't quite understand it.

Question:

  • Let $X \sim Exp(1)$ and $Y \sim Unif[0,1]$ be two independent random variables. Find the PDF of $|X-Y|$ by using convolution.

So, the very first thing I did was define $Z = |X-Y|$. Usually, when I deal with problems like this and want to find the PDF of a sum (or difference), I find the CDF of $Z$ and then differentiate to get the PDF of $Z$. For $z \geq 0$, \begin{align*} F_Z(z) &= P(Z \leq z) = P(|X-Y| \leq z)\\ &= P(-z \leq X-Y \leq z)\\ &= P(X-Y \leq z) - P(X-Y \leq -z)\\ &= F_{X-Y}(z) - F_{X-Y}(-z). \end{align*} So, this is the CDF of $Z=|X-Y|$. If I differentiate it, the $F$'s (CDFs) simply turn into $f$'s (PDFs), and by the chain rule the second term picks up a factor of $-1$, which cancels the minus sign in front. Altogether, this means: \begin{align*} f_Z(z) = f_{X-Y}(z) + f_{X-Y}(-z). \end{align*}

Okay, now that's all fine and dandy, but now I want to explicitly write out these densities in terms of how they are defined over particular intervals. I think this is where convolution comes into play. My game plan was to solve for the two separate PDFs by computing two convolutions.

Because $X$ is exponentially distributed, I know it only takes on positive values. Because $Y$ is uniform on $[0,1]$, I know only values of $y$ in $[0,1]$ matter, not all positive values.

So, this is where I start getting a little "bewildered", haha. I have this written down so far: \begin{align*} f_{X-Y}(z) &= \int_{-\infty}^{\infty}{f_X(x)f_Y(x-z)dx}\\ f_{X-Y}(-z) &= \int_{-\infty}^{\infty}{f_X(x)f_Y(x+z)dx} \end{align*}

Does this set-up make sense? I obviously have to change the limits of integration but that is the part that confuses me the most. How can I derive the limits of integration on my own?
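To convince myself the integrands (leaving the limits aside) are right, I also tried differentiating the CDF of the difference directly, using independence; a quick sketch: \begin{align*} F_{X-Y}(z) &= P(X-Y \leq z) = \int_{-\infty}^{\infty}{f_X(x)P(Y \geq x-z)dx} = \int_{-\infty}^{\infty}{f_X(x)\left(1-F_Y(x-z)\right)dx}\\ f_{X-Y}(z) &= \frac{d}{dz}F_{X-Y}(z) = \int_{-\infty}^{\infty}{f_X(x)f_Y(x-z)dx}. \end{align*} Replacing $z$ with $-z$ gives the second integral, and since $f_{X-Y}(-z)$ is just the density of $Y-X$ evaluated at $z$, the two convolutions I need are really the densities of $X-Y$ and $Y-X$.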

Thank you!


1 Answer


For the first, $X-Y$, we need $x\in [0,\infty)$ and $(x-z)\in[0,1]$, where $z\in[-1,\infty)$.

That's $\max(0,z) \leq x\leq 1+z$, so the integration splits at $z=0$: $$f_{X-Y}(z) = \mathbf 1_{z\in[-1,0)}\int\limits_0^{1+z}f_X(x)f_Y(x-z)\operatorname d x +\mathbf 1_{z\in[0,\infty)}\int\limits_z^{1+z}f_X(x)f_Y(x-z)\operatorname d x $$
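Plugging in $f_X(x) = e^{-x}$ for $x \geq 0$ and $f_Y \equiv 1$ on $[0,1]$, these should evaluate (worth double-checking) to $$f_{X-Y}(z) = \begin{cases} 1-e^{-(1+z)}, & -1 \leq z < 0,\\ e^{-z}\left(1-e^{-1}\right), & z \geq 0. \end{cases}$$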

For the second, $Y-X$, we need $x\in [0,\infty)$ and $(x+z)\in[0,1]$, where $z\in (-\infty, 1]$.

That's $\max(0,-z) \leq x \leq 1-z$, so the integration is: $$f_{Y-X}(z) = \mathbf 1_{z\in(-\infty,0)}\int\limits_{-z}^{1-z}f_X(x)f_Y(x+z)\operatorname d x +\mathbf 1_{z\in[0,1]}\int\limits_0^{1-z}f_X(x)f_Y(x+z)\operatorname d x $$
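Evaluating these the same way gives (again worth double-checking) $$f_{Y-X}(z) = \begin{cases} e^{z}\left(1-e^{-1}\right), & z < 0,\\ 1-e^{-(1-z)}, & 0 \leq z \leq 1, \end{cases}$$ and adding the two pieces as in the question, $f_{|X-Y|}(z) = f_{X-Y}(z) + f_{Y-X}(z)$ for $z \geq 0$: $$f_{|X-Y|}(z) = \begin{cases} e^{-z}\left(1-e^{-1}\right) + 1 - e^{-(1-z)}, & 0 \leq z \leq 1,\\ e^{-z}\left(1-e^{-1}\right), & z > 1, \end{cases}$$ which integrates to $1$, as it should.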

• Please write the most general rule for the PDF of the difference of two random variables. – Commented Feb 7, 2024 at 4:37
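As a numerical sanity check on the closed form pieced together above, here is a rough simulation sketch (assuming numpy; the function name `f_z` and the grid of test points are just for illustration):

```python
import numpy as np

rng = np.random.default_rng(0)
n = 1_000_000
x = rng.exponential(scale=1.0, size=n)   # X ~ Exp(1)
y = rng.uniform(0.0, 1.0, size=n)        # Y ~ Unif[0,1]
z = np.abs(x - y)                        # Z = |X - Y|

def f_z(t):
    """Candidate density of |X - Y| from the convolutions above (to be verified)."""
    t = np.asarray(t, dtype=float)
    inner = np.exp(-t) * (1 - np.exp(-1)) + 1 - np.exp(-(1 - t))  # 0 <= t <= 1
    outer = np.exp(-t) * (1 - np.exp(-1))                         # t > 1
    return np.where(t <= 1.0, inner, outer)

# Compare a density histogram of the simulated Z with the formula at a few points.
hist, edges = np.histogram(z, bins=200, range=(0.0, 5.0), density=True)
centers = 0.5 * (edges[:-1] + edges[1:])
vals = f_z(centers)
for t in (0.1, 0.5, 0.9, 1.5, 3.0):
    i = int(np.argmin(np.abs(centers - t)))
    print(f"z = {centers[i]:.3f}: simulated {hist[i]:.3f} vs formula {vals[i]:.3f}")
```

The two columns should agree to within Monte Carlo error.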
