This is a homework problem, but it isn't me looking for an easy way out. I've been thinking about this problem for a while now and even went to my professor's office hours and still don't quite understand it.
Question:
- Let $X \sim \operatorname{Exp}(1)$ and $Y \sim \operatorname{Unif}[0,1]$ be two independent random variables. Find the PDF of $|X-Y|$ by using convolution.
So, the very first thing I did was define $Z = |X-Y|$. Usually, when I deal with problems like this and want to find the PDF of a sum (or difference), I find the CDF of $Z$ and then differentiate to get the PDF of $Z$. For $z \geq 0$: \begin{align*} F_Z(z) &= P(Z \leq z) = P(|X-Y| \leq z)\\ &= P(-z \leq X-Y \leq z)\\&= P(X-Y \leq z) - P(X-Y \leq -z)\\ &=F_{X-Y}(z) - F_{X-Y}(-z) \end{align*} So, this is the CDF of $Z=|X-Y|$. If I differentiate it, then each CDF $F$ simply spits out the corresponding PDF $f$, and by the chain rule the second term picks up a factor of $-1$, which cancels the minus sign. Altogether, this means: \begin{align*} f_Z(z) = f_{X-Y}(z) + f_{X-Y}(-z) \end{align*} Okay, now that's all fine and dandy, but now I want to explicitly write out these densities in terms of how they are defined over particular intervals. I think this is where convolution comes into play. My game plan was to find the two separate PDFs by computing two convolutions.
Because $X$ is exponentially distributed, I know it only takes positive values. Because $Y$ is uniform on $[0,1]$, I know we are only considering values in $[0,1]$ for it, not all positive values.
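To keep the pieces straight, these are the standard densities for the two given distributions (writing $\mathbf{1}$ for the indicator function): \begin{align*} f_X(x) = e^{-x}\,\mathbf{1}_{\{x \geq 0\}}, \qquad f_Y(y) = \mathbf{1}_{\{0 \leq y \leq 1\}} \end{align*}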
So, this is where I start getting a little "bewildered", haha. I have this written down so far: \begin{align*} f_{X-Y}(z) &= \int_{-\infty}^{\infty}{f_X(x)\,f_Y(x-z)\,dx}\\ f_{X-Y}(-z) &= \int_{-\infty}^{\infty}{f_X(x)\,f_Y(x+z)\,dx} \end{align*}
Does this setup make sense? I obviously have to change the limits of integration, but that is the part that confuses me the most. How can I derive the limits of integration on my own?
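Side note: once I have a candidate closed form, I plan to sanity-check it numerically with a quick Monte Carlo simulation like the sketch below (just a check on my algebra, not part of the required derivation; the histogram of simulated $|X-Y|$ should line up with whatever $f_Z$ I derive):

```python
import numpy as np

rng = np.random.default_rng(0)
n = 1_000_000

x = rng.exponential(scale=1.0, size=n)  # X ~ Exp(1), rate 1
y = rng.uniform(0.0, 1.0, size=n)       # Y ~ Unif[0,1]
z = np.abs(x - y)                       # Z = |X - Y|

# Empirical density of Z on a grid; compare these values against
# a derived closed-form f_Z evaluated at the bin midpoints.
counts, edges = np.histogram(z, bins=200, range=(0.0, 5.0), density=True)
mids = (edges[:-1] + edges[1:]) / 2
```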
Thank you!